Oct 2 19:18:19.837297 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023 Oct 2 19:18:19.837317 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:18:19.837325 kernel: BIOS-provided physical RAM map: Oct 2 19:18:19.837330 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 2 19:18:19.837335 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 2 19:18:19.837341 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 2 19:18:19.837347 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable Oct 2 19:18:19.837353 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved Oct 2 19:18:19.837359 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 2 19:18:19.837365 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 2 19:18:19.837370 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 2 19:18:19.837375 kernel: NX (Execute Disable) protection: active Oct 2 19:18:19.837380 kernel: SMBIOS 2.8 present. Oct 2 19:18:19.837388 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Oct 2 19:18:19.837399 kernel: Hypervisor detected: KVM Oct 2 19:18:19.837408 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 2 19:18:19.837415 kernel: kvm-clock: cpu 0, msr 56f8a001, primary cpu clock Oct 2 19:18:19.837422 kernel: kvm-clock: using sched offset of 2200565205 cycles Oct 2 19:18:19.837431 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 2 19:18:19.837438 kernel: tsc: Detected 2794.748 MHz processor Oct 2 19:18:19.837445 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 2 19:18:19.837451 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 2 19:18:19.837457 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000 Oct 2 19:18:19.837465 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 2 19:18:19.837471 kernel: Using GB pages for direct mapping Oct 2 19:18:19.837477 kernel: ACPI: Early table checksum verification disabled Oct 2 19:18:19.837483 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS ) Oct 2 19:18:19.837489 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:18:19.837495 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:18:19.837501 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:18:19.837507 kernel: ACPI: FACS 0x000000009CFE0000 000040 Oct 2 19:18:19.837513 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:18:19.837520 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:18:19.837526 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:18:19.837533 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec] Oct 2 19:18:19.837541 kernel: ACPI: Reserving DSDT table memory at 
[mem 0x9cfe0040-0x9cfe1a78] Oct 2 19:18:19.837549 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Oct 2 19:18:19.837556 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c] Oct 2 19:18:19.837562 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4] Oct 2 19:18:19.837570 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc] Oct 2 19:18:19.837584 kernel: No NUMA configuration found Oct 2 19:18:19.837598 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff] Oct 2 19:18:19.837605 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff] Oct 2 19:18:19.837611 kernel: Zone ranges: Oct 2 19:18:19.837618 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 2 19:18:19.837624 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff] Oct 2 19:18:19.837632 kernel: Normal empty Oct 2 19:18:19.837638 kernel: Movable zone start for each node Oct 2 19:18:19.837645 kernel: Early memory node ranges Oct 2 19:18:19.837651 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 2 19:18:19.837657 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff] Oct 2 19:18:19.837664 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff] Oct 2 19:18:19.837670 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 19:18:19.837676 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 2 19:18:19.837683 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges Oct 2 19:18:19.837690 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 2 19:18:19.837696 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 2 19:18:19.837703 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 2 19:18:19.837709 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 2 19:18:19.837716 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 2 19:18:19.837722 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 2 19:18:19.837728 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 2 19:18:19.837734 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 2 19:18:19.837741 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 2 19:18:19.837748 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 2 19:18:19.837755 kernel: TSC deadline timer available Oct 2 19:18:19.837761 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Oct 2 19:18:19.837767 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 2 19:18:19.837773 kernel: kvm-guest: setup PV sched yield Oct 2 19:18:19.837780 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices Oct 2 19:18:19.837786 kernel: Booting paravirtualized kernel on KVM Oct 2 19:18:19.837793 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 2 19:18:19.837799 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Oct 2 19:18:19.837805 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288 Oct 2 19:18:19.837813 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152 Oct 2 19:18:19.837819 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 2 19:18:19.837825 kernel: kvm-guest: setup async PF for cpu 0 Oct 2 19:18:19.837831 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Oct 2 19:18:19.837837 kernel: kvm-guest: PV spinlocks enabled Oct 2 19:18:19.837844 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 2 
19:18:19.837850 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733 Oct 2 19:18:19.837856 kernel: Policy zone: DMA32 Oct 2 19:18:19.837864 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:18:19.837872 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:18:19.837878 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 19:18:19.837884 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:18:19.837891 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:18:19.837898 kernel: Memory: 2438768K/2571756K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 132728K reserved, 0K cma-reserved) Oct 2 19:18:19.837904 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 2 19:18:19.837911 kernel: ftrace: allocating 34453 entries in 135 pages Oct 2 19:18:19.837917 kernel: ftrace: allocated 135 pages with 4 groups Oct 2 19:18:19.837925 kernel: rcu: Hierarchical RCU implementation. Oct 2 19:18:19.837932 kernel: rcu: RCU event tracing is enabled. Oct 2 19:18:19.837938 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 2 19:18:19.837944 kernel: Rude variant of Tasks RCU enabled. Oct 2 19:18:19.837951 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:18:19.837957 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 2 19:18:19.837964 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 2 19:18:19.837970 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 2 19:18:19.837977 kernel: random: crng init done Oct 2 19:18:19.837988 kernel: Console: colour VGA+ 80x25 Oct 2 19:18:19.837997 kernel: printk: console [ttyS0] enabled Oct 2 19:18:19.838005 kernel: ACPI: Core revision 20210730 Oct 2 19:18:19.838013 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 2 19:18:19.838019 kernel: APIC: Switch to symmetric I/O mode setup Oct 2 19:18:19.838025 kernel: x2apic enabled Oct 2 19:18:19.838031 kernel: Switched APIC routing to physical x2apic. Oct 2 19:18:19.838038 kernel: kvm-guest: setup PV IPIs Oct 2 19:18:19.838044 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 2 19:18:19.838052 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 2 19:18:19.838058 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Oct 2 19:18:19.838064 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 2 19:18:19.838071 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 2 19:18:19.838077 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 2 19:18:19.838083 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 2 19:18:19.838090 kernel: Spectre V2 : Mitigation: Retpolines Oct 2 19:18:19.838096 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 2 19:18:19.838103 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 2 19:18:19.838116 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 2 19:18:19.838123 kernel: RETBleed: Mitigation: untrained return thunk Oct 2 19:18:19.838129 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 2 19:18:19.838138 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Oct 2 19:18:19.838144 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 2 19:18:19.838151 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 2 19:18:19.838158 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 2 19:18:19.838164 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 2 19:18:19.838171 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Oct 2 19:18:19.838179 kernel: Freeing SMP alternatives memory: 32K Oct 2 19:18:19.838186 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:18:19.838192 kernel: LSM: Security Framework initializing Oct 2 19:18:19.838199 kernel: SELinux: Initializing. Oct 2 19:18:19.838206 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:18:19.838212 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:18:19.838219 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 2 19:18:19.838238 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 2 19:18:19.838245 kernel: ... version: 0 Oct 2 19:18:19.838252 kernel: ... bit width: 48 Oct 2 19:18:19.838258 kernel: ... generic registers: 6 Oct 2 19:18:19.838265 kernel: ... value mask: 0000ffffffffffff Oct 2 19:18:19.838273 kernel: ... max period: 00007fffffffffff Oct 2 19:18:19.838282 kernel: ... fixed-purpose events: 0 Oct 2 19:18:19.838291 kernel: ... event mask: 000000000000003f Oct 2 19:18:19.838300 kernel: signal: max sigframe size: 1776 Oct 2 19:18:19.838306 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:18:19.838315 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:18:19.838324 kernel: x86: Booting SMP configuration: Oct 2 19:18:19.838334 kernel: .... 
node #0, CPUs: #1 Oct 2 19:18:19.838341 kernel: kvm-clock: cpu 1, msr 56f8a041, secondary cpu clock Oct 2 19:18:19.838348 kernel: kvm-guest: setup async PF for cpu 1 Oct 2 19:18:19.838354 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Oct 2 19:18:19.838361 kernel: #2 Oct 2 19:18:19.838368 kernel: kvm-clock: cpu 2, msr 56f8a081, secondary cpu clock Oct 2 19:18:19.838374 kernel: kvm-guest: setup async PF for cpu 2 Oct 2 19:18:19.838382 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Oct 2 19:18:19.838389 kernel: #3 Oct 2 19:18:19.838396 kernel: kvm-clock: cpu 3, msr 56f8a0c1, secondary cpu clock Oct 2 19:18:19.838402 kernel: kvm-guest: setup async PF for cpu 3 Oct 2 19:18:19.838409 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Oct 2 19:18:19.838416 kernel: smp: Brought up 1 node, 4 CPUs Oct 2 19:18:19.838422 kernel: smpboot: Max logical packages: 1 Oct 2 19:18:19.838429 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 2 19:18:19.838437 kernel: devtmpfs: initialized Oct 2 19:18:19.838448 kernel: x86/mm: Memory block size: 128MB Oct 2 19:18:19.838457 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:18:19.838465 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 2 19:18:19.838472 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:18:19.838478 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:18:19.838487 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:18:19.838496 kernel: audit: type=2000 audit(1696274299.834:1): state=initialized audit_enabled=0 res=1 Oct 2 19:18:19.838504 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:18:19.838511 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 2 19:18:19.838520 kernel: cpuidle: using governor menu Oct 2 19:18:19.838527 kernel: ACPI: bus type PCI registered Oct 2 19:18:19.838534 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:18:19.838540 kernel: dca service started, version 1.12.1 Oct 2 19:18:19.838547 kernel: PCI: Using configuration type 1 for base access Oct 2 19:18:19.838554 kernel: PCI: Using configuration type 1 for extended access Oct 2 19:18:19.838561 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 2 19:18:19.838568 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:18:19.838574 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:18:19.838582 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:18:19.838589 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:18:19.838602 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:18:19.838609 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:18:19.838616 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:18:19.838623 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:18:19.838630 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:18:19.838636 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 19:18:19.838643 kernel: ACPI: Interpreter enabled Oct 2 19:18:19.838650 kernel: ACPI: PM: (supports S0 S3 S5) Oct 2 19:18:19.838658 kernel: ACPI: Using IOAPIC for interrupt routing Oct 2 19:18:19.838665 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 2 19:18:19.838672 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Oct 2 19:18:19.838678 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 2 19:18:19.838792 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:18:19.838803 kernel: acpiphp: Slot [3] registered Oct 2 19:18:19.838810 kernel: acpiphp: Slot [4] registered Oct 2 19:18:19.838817 kernel: acpiphp: Slot [5] registered Oct 2 19:18:19.838826 kernel: acpiphp: Slot [6] registered Oct 2 19:18:19.838832 kernel: acpiphp: Slot [7] registered Oct 2 19:18:19.838839 kernel: acpiphp: Slot [8] registered Oct 2 19:18:19.838846 kernel: acpiphp: Slot [9] registered Oct 2 19:18:19.838852 kernel: acpiphp: Slot [10] registered Oct 2 19:18:19.838859 kernel: acpiphp: Slot [11] registered Oct 2 19:18:19.838866 kernel: acpiphp: Slot [12] registered Oct 2 19:18:19.838872 kernel: acpiphp: Slot [13] registered Oct 2 19:18:19.838879 kernel: acpiphp: Slot [14] registered Oct 2 19:18:19.838887 kernel: acpiphp: Slot [15] registered Oct 2 19:18:19.838893 kernel: acpiphp: Slot [16] registered Oct 2 19:18:19.838900 kernel: acpiphp: Slot [17] registered Oct 2 19:18:19.838906 kernel: acpiphp: Slot [18] registered Oct 2 19:18:19.838913 kernel: acpiphp: Slot [19] registered Oct 2 19:18:19.838919 kernel: acpiphp: Slot [20] registered Oct 2 19:18:19.838926 kernel: acpiphp: Slot [21] registered Oct 2 19:18:19.838933 kernel: acpiphp: Slot [22] registered Oct 2 19:18:19.838939 kernel: acpiphp: Slot [23] registered Oct 2 19:18:19.838946 kernel: acpiphp: Slot [24] registered Oct 2 19:18:19.838955 kernel: acpiphp: Slot [25] registered Oct 2 19:18:19.838963 kernel: acpiphp: Slot [26] registered Oct 2 19:18:19.838972 kernel: acpiphp: Slot [27] registered Oct 2 19:18:19.838981 kernel: acpiphp: Slot [28] registered Oct 2 19:18:19.838989 kernel: acpiphp: Slot [29] registered Oct 2 19:18:19.838996 kernel: acpiphp: Slot [30] registered Oct 2 19:18:19.839002 kernel: acpiphp: Slot [31] registered Oct 2 19:18:19.839009 kernel: PCI host bridge to bus 0000:00 Oct 2 19:18:19.839110 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 2 19:18:19.839175 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 2 19:18:19.839248 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 2 19:18:19.839307 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Oct 2 19:18:19.839365 kernel: pci_bus 0000:00: 
root bus resource [mem 0x100000000-0x17fffffff window] Oct 2 19:18:19.839423 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 2 19:18:19.839503 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Oct 2 19:18:19.839603 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Oct 2 19:18:19.839686 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Oct 2 19:18:19.839754 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Oct 2 19:18:19.839821 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Oct 2 19:18:19.839911 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Oct 2 19:18:19.839982 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Oct 2 19:18:19.840049 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Oct 2 19:18:19.840127 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Oct 2 19:18:19.840253 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Oct 2 19:18:19.840330 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Oct 2 19:18:19.840402 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Oct 2 19:18:19.840467 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Oct 2 19:18:19.840532 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Oct 2 19:18:19.840612 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Oct 2 19:18:19.840680 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 2 19:18:19.840754 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Oct 2 19:18:19.840837 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f] Oct 2 19:18:19.840919 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Oct 2 19:18:19.840991 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Oct 2 19:18:19.841082 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Oct 2 19:18:19.841168 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Oct 2 19:18:19.841258 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Oct 2 19:18:19.841326 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Oct 2 19:18:19.841444 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Oct 2 19:18:19.841654 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf] Oct 2 19:18:19.841758 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Oct 2 19:18:19.841856 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Oct 2 19:18:19.841963 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Oct 2 19:18:19.841977 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 2 19:18:19.841985 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 2 19:18:19.841992 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 2 19:18:19.842001 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 2 19:18:19.842010 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 2 19:18:19.842020 kernel: iommu: Default domain type: Translated Oct 2 19:18:19.842030 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 2 19:18:19.842110 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Oct 2 19:18:19.842182 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 2 19:18:19.842269 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Oct 2 19:18:19.842279 kernel: 
vgaarb: loaded Oct 2 19:18:19.842286 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:18:19.842294 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:18:19.842301 kernel: PTP clock support registered Oct 2 19:18:19.842308 kernel: PCI: Using ACPI for IRQ routing Oct 2 19:18:19.842315 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 2 19:18:19.842332 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 2 19:18:19.842339 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff] Oct 2 19:18:19.842347 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 2 19:18:19.842354 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 2 19:18:19.842362 kernel: clocksource: Switched to clocksource kvm-clock Oct 2 19:18:19.842369 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:18:19.842377 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:18:19.842384 kernel: pnp: PnP ACPI init Oct 2 19:18:19.842459 kernel: pnp 00:02: [dma 2] Oct 2 19:18:19.842472 kernel: pnp: PnP ACPI: found 6 devices Oct 2 19:18:19.842479 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 2 19:18:19.842486 kernel: NET: Registered PF_INET protocol family Oct 2 19:18:19.842494 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 19:18:19.842501 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 2 19:18:19.842508 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:18:19.842515 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:18:19.842523 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 2 19:18:19.842531 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 2 19:18:19.842538 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:18:19.842545 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:18:19.842553 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:18:19.842559 kernel: NET: Registered PF_XDP protocol family Oct 2 19:18:19.842640 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 2 19:18:19.842739 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 2 19:18:19.842818 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 2 19:18:19.842899 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Oct 2 19:18:19.842979 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Oct 2 19:18:19.843084 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Oct 2 19:18:19.843187 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 2 19:18:19.843280 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Oct 2 19:18:19.843290 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:18:19.843297 kernel: Initialise system trusted keyrings Oct 2 19:18:19.843304 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 2 19:18:19.843311 kernel: Key type asymmetric registered Oct 2 19:18:19.843322 kernel: Asymmetric key parser 'x509' registered Oct 2 19:18:19.843329 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:18:19.843336 kernel: io scheduler mq-deadline registered Oct 2 19:18:19.843343 kernel: io scheduler kyber registered Oct 2 19:18:19.843350 kernel: io scheduler bfq registered Oct 2 19:18:19.843358 
kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 2 19:18:19.843366 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Oct 2 19:18:19.843373 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Oct 2 19:18:19.843380 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Oct 2 19:18:19.843388 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:18:19.843395 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 2 19:18:19.843402 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 2 19:18:19.843410 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 2 19:18:19.843417 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 2 19:18:19.843487 kernel: rtc_cmos 00:05: RTC can wake from S4 Oct 2 19:18:19.843498 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 2 19:18:19.843558 kernel: rtc_cmos 00:05: registered as rtc0 Oct 2 19:18:19.843628 kernel: rtc_cmos 00:05: setting system clock to 2023-10-02T19:18:19 UTC (1696274299) Oct 2 19:18:19.843689 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 2 19:18:19.843698 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:18:19.843705 kernel: Segment Routing with IPv6 Oct 2 19:18:19.843712 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:18:19.843719 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:18:19.843726 kernel: Key type dns_resolver registered Oct 2 19:18:19.843733 kernel: IPI shorthand broadcast: enabled Oct 2 19:18:19.843740 kernel: sched_clock: Marking stable (350127415, 95503153)->(472956142, -27325574) Oct 2 19:18:19.843751 kernel: registered taskstats version 1 Oct 2 19:18:19.843760 kernel: Loading compiled-in X.509 certificates Oct 2 19:18:19.843770 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861' Oct 2 19:18:19.843779 kernel: Key type .fscrypt registered Oct 2 19:18:19.843786 kernel: Key type fscrypt-provisioning registered Oct 2 19:18:19.843794 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 2 19:18:19.843801 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:18:19.843808 kernel: ima: No architecture policies found Oct 2 19:18:19.843815 kernel: Freeing unused kernel image (initmem) memory: 45372K Oct 2 19:18:19.843824 kernel: Write protecting the kernel read-only data: 28672k Oct 2 19:18:19.843832 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 2 19:18:19.843839 kernel: Freeing unused kernel image (rodata/data gap) memory: 644K Oct 2 19:18:19.843846 kernel: Run /init as init process Oct 2 19:18:19.843854 kernel: with arguments: Oct 2 19:18:19.843861 kernel: /init Oct 2 19:18:19.843868 kernel: with environment: Oct 2 19:18:19.843885 kernel: HOME=/ Oct 2 19:18:19.843892 kernel: TERM=linux Oct 2 19:18:19.843901 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:18:19.843911 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:18:19.843922 systemd[1]: Detected virtualization kvm. Oct 2 19:18:19.843930 systemd[1]: Detected architecture x86-64. Oct 2 19:18:19.843937 systemd[1]: Running in initrd. Oct 2 19:18:19.843945 systemd[1]: No hostname configured, using default hostname. 
Oct 2 19:18:19.843952 systemd[1]: Hostname set to <localhost>. Oct 2 19:18:19.843962 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:18:19.843969 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:18:19.843977 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:18:19.843984 systemd[1]: Reached target cryptsetup.target. Oct 2 19:18:19.843992 systemd[1]: Reached target paths.target. Oct 2 19:18:19.844001 systemd[1]: Reached target slices.target. Oct 2 19:18:19.844012 systemd[1]: Reached target swap.target. Oct 2 19:18:19.844021 systemd[1]: Reached target timers.target. Oct 2 19:18:19.844031 systemd[1]: Listening on iscsid.socket. Oct 2 19:18:19.844038 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:18:19.844046 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:18:19.844054 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:18:19.844062 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:18:19.844071 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:18:19.844081 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:18:19.844094 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:18:19.844108 systemd[1]: Reached target sockets.target. Oct 2 19:18:19.844118 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:18:19.844126 systemd[1]: Finished network-cleanup.service. Oct 2 19:18:19.844135 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:18:19.844142 systemd[1]: Starting systemd-journald.service... Oct 2 19:18:19.844151 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:18:19.844160 systemd[1]: Starting systemd-resolved.service... Oct 2 19:18:19.844168 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:18:19.844176 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:18:19.844183 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:18:19.844191 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:18:19.844199 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:18:19.844211 systemd-journald[198]: Journal started Oct 2 19:18:19.844269 systemd-journald[198]: Runtime Journal (/run/log/journal/ffdb050014d742c4bf086803fce6219b) is 6.0M, max 48.5M, 42.5M free. Oct 2 19:18:19.835877 systemd-modules-load[199]: Inserted module 'overlay' Oct 2 19:18:19.861188 kernel: audit: type=1130 audit(1696274299.857:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:19.861249 systemd[1]: Started systemd-journald.service. Oct 2 19:18:19.861263 kernel: audit: type=1130 audit(1696274299.860:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:19.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:19.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:19.858018 systemd-resolved[200]: Positive Trust Anchors: Oct 2 19:18:19.868268 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Oct 2 19:18:19.868287 kernel: audit: type=1130 audit(1696274299.864:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:19.868298 kernel: Bridge firewalling registered Oct 2 19:18:19.868307 kernel: audit: type=1130 audit(1696274299.868:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:19.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:19.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:19.858025 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:18:19.858051 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:18:19.860253 systemd-resolved[200]: Defaulting to hostname 'linux'. Oct 2 19:18:19.861286 systemd[1]: Started systemd-resolved.service. Oct 2 19:18:19.865046 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:18:19.868260 systemd-modules-load[199]: Inserted module 'br_netfilter' Oct 2 19:18:19.868317 systemd[1]: Reached target nss-lookup.target. Oct 2 19:18:19.871216 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:18:19.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:19.883465 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:18:19.888696 kernel: audit: type=1130 audit(1696274299.884:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:19.888728 kernel: SCSI subsystem initialized Oct 2 19:18:19.885604 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:18:19.894168 dracut-cmdline[216]: dracut-dracut-053 Oct 2 19:18:19.895700 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:18:19.927891 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 2 19:18:19.927939 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:18:19.927953 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:18:19.930632 systemd-modules-load[199]: Inserted module 'dm_multipath' Oct 2 19:18:19.931539 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:18:19.933017 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:18:19.937143 kernel: audit: type=1130 audit(1696274299.931:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:19.937162 kernel: Loading iSCSI transport class v2.0-870. Oct 2 19:18:19.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:19.941704 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:18:19.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:19.945268 kernel: audit: type=1130 audit(1696274299.942:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:19.951258 kernel: iscsi: registered transport (tcp) Oct 2 19:18:19.971254 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:18:19.971321 kernel: QLogic iSCSI HBA Driver Oct 2 19:18:19.992401 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:18:19.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:19.994112 systemd[1]: Starting dracut-pre-udev.service... Oct 2 19:18:19.996338 kernel: audit: type=1130 audit(1696274299.993:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:20.036258 kernel: raid6: avx2x4 gen() 30461 MB/s Oct 2 19:18:20.074252 kernel: raid6: avx2x4 xor() 8078 MB/s Oct 2 19:18:20.091263 kernel: raid6: avx2x2 gen() 32027 MB/s Oct 2 19:18:20.108260 kernel: raid6: avx2x2 xor() 13315 MB/s Oct 2 19:18:20.125258 kernel: raid6: avx2x1 gen() 25538 MB/s Oct 2 19:18:20.142267 kernel: raid6: avx2x1 xor() 13525 MB/s Oct 2 19:18:20.159273 kernel: raid6: sse2x4 gen() 13851 MB/s Oct 2 19:18:20.180260 kernel: raid6: sse2x4 xor() 7289 MB/s Oct 2 19:18:20.197282 kernel: raid6: sse2x2 gen() 13081 MB/s Oct 2 19:18:20.214281 kernel: raid6: sse2x2 xor() 8266 MB/s Oct 2 19:18:20.231282 kernel: raid6: sse2x1 gen() 9125 MB/s Oct 2 19:18:20.248309 kernel: raid6: sse2x1 xor() 6736 MB/s Oct 2 19:18:20.248372 kernel: raid6: using algorithm avx2x2 gen() 32027 MB/s Oct 2 19:18:20.248381 kernel: raid6: .... xor() 13315 MB/s, rmw enabled Oct 2 19:18:20.249312 kernel: raid6: using avx2x2 recovery algorithm Oct 2 19:18:20.263267 kernel: xor: automatically using best checksumming function avx Oct 2 19:18:20.377281 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 2 19:18:20.385080 systemd[1]: Finished dracut-pre-udev.service. 
Oct 2 19:18:20.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:20.388000 audit: BPF prog-id=7 op=LOAD Oct 2 19:18:20.388000 audit: BPF prog-id=8 op=LOAD Oct 2 19:18:20.389035 systemd[1]: Starting systemd-udevd.service... Oct 2 19:18:20.390079 kernel: audit: type=1130 audit(1696274300.386:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:20.402136 systemd-udevd[399]: Using default interface naming scheme 'v252'. Oct 2 19:18:20.406179 systemd[1]: Started systemd-udevd.service. Oct 2 19:18:20.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:20.408506 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:18:20.420796 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Oct 2 19:18:20.442533 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:18:20.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:20.444279 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:18:20.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:20.491008 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:18:20.524262 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:18:20.529296 kernel: libata version 3.00 loaded. Oct 2 19:18:20.533260 kernel: ata_piix 0000:00:01.1: version 2.13 Oct 2 19:18:20.544292 kernel: virtio_blk virtio1: [vda] 9289728 512-byte logical blocks (4.76 GB/4.43 GiB) Oct 2 19:18:20.545264 kernel: scsi host0: ata_piix Oct 2 19:18:20.546251 kernel: AVX2 version of gcm_enc/dec engaged. Oct 2 19:18:20.546285 kernel: AES CTR mode by8 optimization enabled Oct 2 19:18:20.548257 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:18:20.550244 kernel: scsi host1: ata_piix Oct 2 19:18:20.551270 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Oct 2 19:18:20.551296 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Oct 2 19:18:20.565387 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:18:20.579394 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (451) Oct 2 19:18:20.587878 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:18:20.593594 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:18:20.594397 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:18:20.599333 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:18:20.601040 systemd[1]: Starting disk-uuid.service... 
Oct 2 19:18:20.611250 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:18:20.641272 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:18:20.644248 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:18:20.710247 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 2 19:18:20.710298 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 2 19:18:20.739255 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 2 19:18:20.739426 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 2 19:18:20.756251 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Oct 2 19:18:21.644702 disk-uuid[515]: The operation has completed successfully. Oct 2 19:18:21.645786 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:18:21.663867 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:18:21.663943 systemd[1]: Finished disk-uuid.service. Oct 2 19:18:21.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:21.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:21.675732 systemd[1]: Starting verity-setup.service... Oct 2 19:18:21.687252 kernel: device-mapper: verity: sha256 using implementation "sha256-generic" Oct 2 19:18:21.717365 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:18:21.719332 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:18:21.722139 systemd[1]: Finished verity-setup.service. Oct 2 19:18:21.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:21.789242 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:18:21.789261 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:18:21.789459 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 19:18:21.790201 systemd[1]: Starting ignition-setup.service... Oct 2 19:18:21.793039 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 19:18:21.801069 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:18:21.801103 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:18:21.801120 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:18:21.808368 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:18:21.815365 systemd[1]: Finished ignition-setup.service. Oct 2 19:18:21.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:21.816733 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:18:21.860596 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:18:21.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:21.862000 audit: BPF prog-id=9 op=LOAD Oct 2 19:18:21.862835 systemd[1]: Starting systemd-networkd.service... 
Oct 2 19:18:21.881467 systemd-networkd[694]: lo: Link UP Oct 2 19:18:21.881475 systemd-networkd[694]: lo: Gained carrier Oct 2 19:18:21.881874 systemd-networkd[694]: Enumeration completed Oct 2 19:18:21.882105 systemd-networkd[694]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:18:21.882366 systemd[1]: Started systemd-networkd.service. Oct 2 19:18:21.883162 systemd-networkd[694]: eth0: Link UP Oct 2 19:18:21.883166 systemd-networkd[694]: eth0: Gained carrier Oct 2 19:18:21.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:21.886786 systemd[1]: Reached target network.target. Oct 2 19:18:21.888464 systemd[1]: Starting iscsiuio.service... Oct 2 19:18:21.892409 systemd[1]: Started iscsiuio.service. Oct 2 19:18:21.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:21.893753 systemd[1]: Starting iscsid.service... Oct 2 19:18:21.896438 iscsid[704]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:18:21.896438 iscsid[704]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Oct 2 19:18:21.896438 iscsid[704]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:18:21.896438 iscsid[704]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 19:18:21.896438 iscsid[704]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:18:21.896438 iscsid[704]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:18:21.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:21.897624 systemd[1]: Started iscsid.service. Oct 2 19:18:21.903009 systemd-networkd[694]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:18:21.905081 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:18:21.916046 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:18:21.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:21.916718 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:18:21.917652 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:18:21.918196 systemd[1]: Reached target remote-fs.target. Oct 2 19:18:21.919996 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:18:21.927673 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:18:21.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:18:21.949034 ignition[614]: Ignition 2.14.0 Oct 2 19:18:21.949045 ignition[614]: Stage: fetch-offline Oct 2 19:18:21.949094 ignition[614]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:18:21.949101 ignition[614]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:18:21.949203 ignition[614]: parsed url from cmdline: "" Oct 2 19:18:21.949206 ignition[614]: no config URL provided Oct 2 19:18:21.949210 ignition[614]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:18:21.949216 ignition[614]: no config at "/usr/lib/ignition/user.ign" Oct 2 19:18:21.949247 ignition[614]: op(1): [started] loading QEMU firmware config module Oct 2 19:18:21.949253 ignition[614]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 2 19:18:21.958217 ignition[614]: op(1): [finished] loading QEMU firmware config module Oct 2 19:18:21.968117 ignition[614]: parsing config with SHA512: 8c97e3e5f559d1095c44872edf686dd0d4b6cbddbc2675c21835c9b54cf129578f508ffa7cd5ae22ded06160737b6aee98a56a7f5051605211ad752b9cee7fd6 Oct 2 19:18:21.990308 unknown[614]: fetched base config from "system" Oct 2 19:18:21.990323 unknown[614]: fetched user config from "qemu" Oct 2 19:18:21.990812 ignition[614]: fetch-offline: fetch-offline passed Oct 2 19:18:21.990875 ignition[614]: Ignition finished successfully Oct 2 19:18:21.993861 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:18:21.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:21.994024 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 2 19:18:21.995169 systemd[1]: Starting ignition-kargs.service... Oct 2 19:18:22.004717 ignition[719]: Ignition 2.14.0 Oct 2 19:18:22.004728 ignition[719]: Stage: kargs Oct 2 19:18:22.004832 ignition[719]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:18:22.004844 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:18:22.007953 ignition[719]: kargs: kargs passed Oct 2 19:18:22.008001 ignition[719]: Ignition finished successfully Oct 2 19:18:22.009652 systemd[1]: Finished ignition-kargs.service. Oct 2 19:18:22.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:22.010922 systemd[1]: Starting ignition-disks.service... Oct 2 19:18:22.018063 ignition[725]: Ignition 2.14.0 Oct 2 19:18:22.018071 ignition[725]: Stage: disks Oct 2 19:18:22.018161 ignition[725]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:18:22.018168 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:18:22.020109 systemd[1]: Finished ignition-disks.service. Oct 2 19:18:22.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:22.019092 ignition[725]: disks: disks passed Oct 2 19:18:22.021334 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:18:22.019127 ignition[725]: Ignition finished successfully Oct 2 19:18:22.022300 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:18:22.022823 systemd[1]: Reached target local-fs.target. Oct 2 19:18:22.023744 systemd[1]: Reached target sysinit.target. 
Oct 2 19:18:22.024287 systemd[1]: Reached target basic.target. Oct 2 19:18:22.026144 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:18:22.036383 systemd-fsck[733]: ROOT: clean, 603/553520 files, 56012/553472 blocks Oct 2 19:18:22.041147 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:18:22.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:22.043078 systemd[1]: Mounting sysroot.mount... Oct 2 19:18:22.050256 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:18:22.050568 systemd[1]: Mounted sysroot.mount. Oct 2 19:18:22.051100 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:18:22.053080 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:18:22.053517 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 19:18:22.053577 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:18:22.053608 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:18:22.056405 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:18:22.057773 systemd[1]: Starting initrd-setup-root.service... Oct 2 19:18:22.061928 initrd-setup-root[743]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:18:22.065710 initrd-setup-root[751]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:18:22.068480 initrd-setup-root[759]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:18:22.071307 initrd-setup-root[767]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:18:22.098388 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:18:22.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:22.099754 systemd[1]: Starting ignition-mount.service... Oct 2 19:18:22.101197 systemd[1]: Starting sysroot-boot.service... Oct 2 19:18:22.105523 bash[784]: umount: /sysroot/usr/share/oem: not mounted. Oct 2 19:18:22.112308 ignition[785]: INFO : Ignition 2.14.0 Oct 2 19:18:22.112308 ignition[785]: INFO : Stage: mount Oct 2 19:18:22.113860 ignition[785]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:18:22.113860 ignition[785]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:18:22.113860 ignition[785]: INFO : mount: mount passed Oct 2 19:18:22.113860 ignition[785]: INFO : Ignition finished successfully Oct 2 19:18:22.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:22.113779 systemd[1]: Finished ignition-mount.service. Oct 2 19:18:22.125618 systemd[1]: Finished sysroot-boot.service. Oct 2 19:18:22.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:22.732023 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Oct 2 19:18:22.739261 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (794) Oct 2 19:18:22.740660 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:18:22.740680 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:18:22.740690 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:18:22.744021 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:18:22.744675 systemd[1]: Starting ignition-files.service... Oct 2 19:18:22.758980 ignition[814]: INFO : Ignition 2.14.0 Oct 2 19:18:22.758980 ignition[814]: INFO : Stage: files Oct 2 19:18:22.760145 ignition[814]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:18:22.760145 ignition[814]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:18:22.763314 ignition[814]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:18:22.764736 ignition[814]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:18:22.764736 ignition[814]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:18:22.767344 ignition[814]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:18:22.768271 ignition[814]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:18:22.769755 unknown[814]: wrote ssh authorized keys file for user: core Oct 2 19:18:22.770654 ignition[814]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:18:22.772006 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Oct 2 19:18:22.773638 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Oct 2 19:18:23.128936 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 2 19:18:23.394858 ignition[814]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Oct 2 19:18:23.394858 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Oct 2 19:18:23.397906 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Oct 2 19:18:23.397906 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Oct 2 19:18:23.443338 systemd-networkd[694]: eth0: Gained IPv6LL Oct 2 19:18:23.505527 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 2 19:18:23.573009 ignition[814]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Oct 2 19:18:23.574932 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Oct 2 19:18:23.574932 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: 
op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:18:23.574932 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1 Oct 2 19:18:23.676110 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Oct 2 19:18:24.420137 ignition[814]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1 Oct 2 19:18:24.438306 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:18:24.438306 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:18:24.438306 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1 Oct 2 19:18:24.495305 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Oct 2 19:18:26.865060 ignition[814]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75 Oct 2 19:18:26.867761 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:18:26.867761 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:18:26.867761 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:18:26.867761 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:18:26.867761 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:18:26.867761 ignition[814]: INFO : files: op(9): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:18:26.867761 ignition[814]: INFO : files: op(9): op(a): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:18:26.867761 ignition[814]: INFO : files: op(9): op(a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:18:26.867761 ignition[814]: INFO : files: op(9): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:18:26.867761 ignition[814]: INFO : files: op(b): [started] processing unit "prepare-critools.service" Oct 2 19:18:26.867761 ignition[814]: INFO : files: op(b): op(c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:18:26.867761 ignition[814]: INFO : files: op(b): op(c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:18:26.867761 ignition[814]: INFO : files: op(b): [finished] processing unit "prepare-critools.service" Oct 2 19:18:26.867761 ignition[814]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 2 19:18:26.867761 ignition[814]: INFO : files: 
op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:18:26.867761 ignition[814]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:18:26.867761 ignition[814]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 2 19:18:26.867761 ignition[814]: INFO : files: op(f): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:18:26.889985 ignition[814]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:18:26.889985 ignition[814]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Oct 2 19:18:26.889985 ignition[814]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:18:26.910002 ignition[814]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:18:26.911208 ignition[814]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Oct 2 19:18:26.911208 ignition[814]: INFO : files: op(12): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:18:26.911208 ignition[814]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:18:26.914005 ignition[814]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:18:26.915242 ignition[814]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:18:26.916356 ignition[814]: INFO : files: files passed Oct 2 19:18:26.916994 ignition[814]: INFO : Ignition finished successfully Oct 2 19:18:26.917951 systemd[1]: Finished ignition-files.service. Oct 2 19:18:26.921475 kernel: kauditd_printk_skb: 23 callbacks suppressed Oct 2 19:18:26.921493 kernel: audit: type=1130 audit(1696274306.918:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:26.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:26.921515 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:18:26.921599 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:18:26.922080 systemd[1]: Starting ignition-quench.service... Oct 2 19:18:26.924563 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:18:26.930311 kernel: audit: type=1130 audit(1696274306.925:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:26.930336 kernel: audit: type=1131 audit(1696274306.925:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:18:26.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:26.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:26.924630 systemd[1]: Finished ignition-quench.service. Oct 2 19:18:26.934292 initrd-setup-root-after-ignition[839]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Oct 2 19:18:26.936757 initrd-setup-root-after-ignition[841]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:18:26.938253 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:18:26.941728 kernel: audit: type=1130 audit(1696274306.938:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:26.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:26.938373 systemd[1]: Reached target ignition-complete.target. Oct 2 19:18:26.942771 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:18:26.953631 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:18:26.953700 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:18:26.959323 kernel: audit: type=1130 audit(1696274306.954:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:26.959344 kernel: audit: type=1131 audit(1696274306.954:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:26.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:26.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:26.954908 systemd[1]: Reached target initrd-fs.target. Oct 2 19:18:26.959884 systemd[1]: Reached target initrd.target. Oct 2 19:18:26.960912 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:18:26.962323 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:18:26.973685 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:18:26.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:26.975938 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:18:26.977852 kernel: audit: type=1130 audit(1696274306.974:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:26.984806 systemd[1]: Stopped target nss-lookup.target. 
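The files stage recorded above fetches each artifact (cni-plugins, crictl, kubeadm, kubelet) over HTTPS and only logs "[finished] writing file" once the download matches a pinned SHA-512 ("file matches expected sum of: ..."). As a rough illustration of that integrity check — a sketch in Python, not Ignition's own Go implementation — the following reuses the kubeadm URL and digest copied verbatim from the log; the streaming and chunk size are implementation choices of the sketch, not anything the log specifies.

# Sketch: verify a downloaded artifact against the SHA-512 recorded in the log.
# Not Ignition's implementation; the URL and digest below are copied verbatim
# from the kubeadm entry in the files stage above.
import hashlib
import urllib.request

URL = ("https://storage.googleapis.com/kubernetes-release/release/"
       "v1.28.1/bin/linux/amd64/kubeadm")
EXPECTED_SHA512 = (
    "f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683"
    "bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1"
)

def fetch_and_verify(url: str, expected: str, dest: str) -> None:
    digest = hashlib.sha512()
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        for chunk in iter(lambda: resp.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
            out.write(chunk)
    if digest.hexdigest() != expected:
        raise ValueError(f"checksum mismatch for {url}")

if __name__ == "__main__":
    fetch_and_verify(URL, EXPECTED_SHA512, "kubeadm")
    print("file matches expected sum")

The same pattern applies to the kubelet, crictl and CNI-plugin archives shown earlier; only the URL and the expected digest change.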
Oct 2 19:18:26.985633 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:18:26.986922 systemd[1]: Stopped target timers.target. Oct 2 19:18:26.988294 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:18:26.992257 kernel: audit: type=1131 audit(1696274306.989:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:26.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:26.988430 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:18:26.989685 systemd[1]: Stopped target initrd.target. Oct 2 19:18:26.992875 systemd[1]: Stopped target basic.target. Oct 2 19:18:26.993596 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:18:26.995431 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:18:26.996495 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:18:26.997126 systemd[1]: Stopped target remote-fs.target. Oct 2 19:18:26.997472 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:18:26.997681 systemd[1]: Stopped target sysinit.target. Oct 2 19:18:26.997921 systemd[1]: Stopped target local-fs.target. Oct 2 19:18:27.000943 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:18:27.001843 systemd[1]: Stopped target swap.target. Oct 2 19:18:27.004353 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:18:27.004515 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:18:27.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.008245 kernel: audit: type=1131 audit(1696274307.004:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.005262 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:18:27.008762 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:18:27.012319 kernel: audit: type=1131 audit(1696274307.009:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.008846 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:18:27.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.009512 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:18:27.009591 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:18:27.013030 systemd[1]: Stopped target paths.target. Oct 2 19:18:27.013956 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:18:27.018333 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:18:27.018576 systemd[1]: Stopped target slices.target. Oct 2 19:18:27.020243 systemd[1]: Stopped target sockets.target. 
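A reading aid for the kernel audit records interleaved above: the field of the form audit(1696274306.918:34) carries a Unix epoch plus a per-boot record serial, so it can be mapped back onto the journal's wall-clock prefixes. A small sketch follows; the regex and the UTC assumption are mine (the journal timestamps on this host are shown with a -00 offset), the example value is taken from the log.

# Sketch: decode the epoch inside kernel audit records such as
# "audit(1696274306.918:34)" into the wall-clock form the journal uses.
import re
from datetime import datetime, timezone

AUDIT_RE = re.compile(r"audit\((?P<epoch>\d+\.\d+):(?P<serial>\d+)\)")

def decode(field: str) -> str:
    m = AUDIT_RE.search(field)
    if m is None:
        raise ValueError(f"not an audit timestamp: {field!r}")
    ts = datetime.fromtimestamp(float(m["epoch"]), tz=timezone.utc)
    return f"{ts:%b %d %H:%M:%S.%f} (serial {m['serial']})"

print(decode("audit(1696274306.918:34)"))  # -> Oct 02 19:18:26.918000 (serial 34)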
Oct 2 19:18:27.021346 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:18:27.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.021517 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:18:27.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.022616 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:18:27.022730 systemd[1]: Stopped ignition-files.service. Oct 2 19:18:27.026554 iscsid[704]: iscsid shutting down. Oct 2 19:18:27.024707 systemd[1]: Stopping ignition-mount.service... Oct 2 19:18:27.025967 systemd[1]: Stopping iscsid.service... Oct 2 19:18:27.029788 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:18:27.031042 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:18:27.031988 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:18:27.033634 ignition[854]: INFO : Ignition 2.14.0 Oct 2 19:18:27.033634 ignition[854]: INFO : Stage: umount Oct 2 19:18:27.033634 ignition[854]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:18:27.033634 ignition[854]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:18:27.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.036974 ignition[854]: INFO : umount: umount passed Oct 2 19:18:27.036974 ignition[854]: INFO : Ignition finished successfully Oct 2 19:18:27.033761 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:18:27.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.034857 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:18:27.040804 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:18:27.041419 systemd[1]: Stopped iscsid.service. Oct 2 19:18:27.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.042840 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:18:27.043605 systemd[1]: Stopped ignition-mount.service. Oct 2 19:18:27.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.045796 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:18:27.046778 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:18:27.047339 systemd[1]: Closed iscsid.socket. Oct 2 19:18:27.048188 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:18:27.048782 systemd[1]: Stopped ignition-disks.service. Oct 2 19:18:27.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:18:27.049817 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:18:27.049847 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:18:27.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.051343 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:18:27.051373 systemd[1]: Stopped ignition-setup.service. Oct 2 19:18:27.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.052947 systemd[1]: Stopping iscsiuio.service... Oct 2 19:18:27.053996 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:18:27.054632 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:18:27.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.055980 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:18:27.056567 systemd[1]: Stopped iscsiuio.service. Oct 2 19:18:27.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.057605 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:18:27.058179 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:18:27.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.059800 systemd[1]: Stopped target network.target. Oct 2 19:18:27.060754 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:18:27.060778 systemd[1]: Closed iscsiuio.socket. Oct 2 19:18:27.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.061748 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:18:27.061777 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:18:27.062810 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:18:27.064850 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:18:27.074688 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:18:27.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.074791 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:18:27.076324 systemd-networkd[694]: eth0: DHCPv6 lease lost Oct 2 19:18:27.078204 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:18:27.078322 systemd[1]: Stopped systemd-networkd.service. 
Oct 2 19:18:27.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.079133 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:18:27.079157 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:18:27.080000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:18:27.082234 systemd[1]: Stopping network-cleanup.service... Oct 2 19:18:27.083218 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:18:27.083969 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:18:27.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.085142 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:18:27.085175 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:18:27.086000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:18:27.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.086785 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:18:27.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.086820 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:18:27.088141 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:18:27.090421 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:18:27.092816 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:18:27.093454 systemd[1]: Stopped network-cleanup.service. Oct 2 19:18:27.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.094636 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:18:27.095281 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:18:27.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.096575 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:18:27.096608 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:18:27.098171 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:18:27.098197 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:18:27.099750 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:18:27.099784 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:18:27.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.101336 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:18:27.101364 systemd[1]: Stopped dracut-cmdline.service. 
Oct 2 19:18:27.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.102978 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:18:27.103008 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:18:27.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.105016 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:18:27.106109 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 19:18:27.106146 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 19:18:27.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.108018 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:18:27.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.108048 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:18:27.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.109216 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:18:27.109256 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:18:27.112033 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 19:18:27.113208 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:18:27.113891 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:18:27.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:27.115047 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:18:27.116607 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:18:27.136776 systemd[1]: Switching root. Oct 2 19:18:27.154405 systemd-journald[198]: Journal stopped Oct 2 19:18:30.556433 systemd-journald[198]: Received SIGTERM from PID 1 (n/a). Oct 2 19:18:30.556489 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:18:30.556503 kernel: SELinux: Class anon_inode not defined in policy. 
Oct 2 19:18:30.556514 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:18:30.556525 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:18:30.556540 kernel: SELinux: policy capability open_perms=1 Oct 2 19:18:30.556551 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:18:30.556567 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:18:30.556578 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:18:30.556588 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:18:30.556599 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:18:30.556614 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:18:30.556625 systemd[1]: Successfully loaded SELinux policy in 36.664ms. Oct 2 19:18:30.556649 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.181ms. Oct 2 19:18:30.556665 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:18:30.556678 systemd[1]: Detected virtualization kvm. Oct 2 19:18:30.556690 systemd[1]: Detected architecture x86-64. Oct 2 19:18:30.556702 systemd[1]: Detected first boot. Oct 2 19:18:30.556714 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:18:30.556726 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:18:30.556740 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:18:30.556753 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:18:30.556766 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:18:30.556779 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:18:30.556790 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:18:30.556803 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:18:30.556815 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:18:30.556827 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:18:30.556840 systemd[1]: Created slice system-getty.slice. Oct 2 19:18:30.556852 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:18:30.556864 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:18:30.556876 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:18:30.556887 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:18:30.556899 systemd[1]: Created slice user.slice. Oct 2 19:18:30.556911 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:18:30.556924 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:18:30.556936 systemd[1]: Set up automount boot.automount. Oct 2 19:18:30.556949 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:18:30.556960 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:18:30.556973 systemd[1]: Stopped target initrd-fs.target. 
Oct 2 19:18:30.556989 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:18:30.557001 systemd[1]: Reached target integritysetup.target. Oct 2 19:18:30.557013 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:18:30.557025 systemd[1]: Reached target remote-fs.target. Oct 2 19:18:30.557038 systemd[1]: Reached target slices.target. Oct 2 19:18:30.557051 systemd[1]: Reached target swap.target. Oct 2 19:18:30.557062 systemd[1]: Reached target torcx.target. Oct 2 19:18:30.557074 systemd[1]: Reached target veritysetup.target. Oct 2 19:18:30.557086 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:18:30.557100 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:18:30.557112 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:18:30.557124 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:18:30.557136 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:18:30.557148 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:18:30.557161 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:18:30.557173 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:18:30.557186 systemd[1]: Mounting media.mount... Oct 2 19:18:30.557198 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:18:30.557210 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:18:30.557222 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:18:30.557254 systemd[1]: Mounting tmp.mount... Oct 2 19:18:30.557266 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:18:30.557277 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:18:30.557297 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:18:30.557309 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:18:30.557321 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:18:30.557333 systemd[1]: Starting modprobe@drm.service... Oct 2 19:18:30.557347 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:18:30.557362 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:18:30.557378 systemd[1]: Starting modprobe@loop.service... Oct 2 19:18:30.557393 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:18:30.557408 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:18:30.557425 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:18:30.557440 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:18:30.557455 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:18:30.557469 kernel: loop: module loaded Oct 2 19:18:30.557484 systemd[1]: Stopped systemd-journald.service. Oct 2 19:18:30.557500 kernel: fuse: init (API version 7.34) Oct 2 19:18:30.557514 systemd[1]: Starting systemd-journald.service... Oct 2 19:18:30.557529 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:18:30.557543 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:18:30.557558 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:18:30.557570 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:18:30.557582 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:18:30.557594 systemd[1]: Stopped verity-setup.service. Oct 2 19:18:30.557606 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:18:30.557619 systemd[1]: Mounted dev-hugepages.mount. 
Oct 2 19:18:30.557631 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:18:30.557643 systemd[1]: Mounted media.mount. Oct 2 19:18:30.557654 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:18:30.557675 systemd-journald[960]: Journal started Oct 2 19:18:30.557716 systemd-journald[960]: Runtime Journal (/run/log/journal/ffdb050014d742c4bf086803fce6219b) is 6.0M, max 48.5M, 42.5M free. Oct 2 19:18:27.222000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:18:28.158000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:18:28.158000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:18:28.158000 audit: BPF prog-id=10 op=LOAD Oct 2 19:18:28.158000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:18:28.158000 audit: BPF prog-id=11 op=LOAD Oct 2 19:18:28.158000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:18:30.430000 audit: BPF prog-id=12 op=LOAD Oct 2 19:18:30.430000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:18:30.430000 audit: BPF prog-id=13 op=LOAD Oct 2 19:18:30.430000 audit: BPF prog-id=14 op=LOAD Oct 2 19:18:30.430000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:18:30.430000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:18:30.431000 audit: BPF prog-id=15 op=LOAD Oct 2 19:18:30.431000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:18:30.431000 audit: BPF prog-id=16 op=LOAD Oct 2 19:18:30.431000 audit: BPF prog-id=17 op=LOAD Oct 2 19:18:30.431000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:18:30.431000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:18:30.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.455000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:18:30.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:18:30.533000 audit: BPF prog-id=18 op=LOAD Oct 2 19:18:30.533000 audit: BPF prog-id=19 op=LOAD Oct 2 19:18:30.533000 audit: BPF prog-id=20 op=LOAD Oct 2 19:18:30.533000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:18:30.533000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:18:30.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.555000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:18:30.555000 audit[960]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffea5613300 a2=4000 a3=7ffea561339c items=0 ppid=1 pid=960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:30.555000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:18:28.217291 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2023-10-02T19:18:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:18:30.429542 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:18:28.217523 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2023-10-02T19:18:28Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:18:30.429553 systemd[1]: Unnecessary job was removed for dev-vda6.device. Oct 2 19:18:28.217539 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2023-10-02T19:18:28Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:18:30.432667 systemd[1]: systemd-journald.service: Deactivated successfully. 
Oct 2 19:18:28.217566 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2023-10-02T19:18:28Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:18:28.217575 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2023-10-02T19:18:28Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:18:28.217600 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2023-10-02T19:18:28Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:18:28.217611 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2023-10-02T19:18:28Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:18:28.217786 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2023-10-02T19:18:28Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:18:28.217814 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2023-10-02T19:18:28Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:18:28.217824 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2023-10-02T19:18:28Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:18:28.218097 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2023-10-02T19:18:28Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:18:28.218126 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2023-10-02T19:18:28Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:18:30.559660 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:18:30.559677 systemd[1]: Started systemd-journald.service. 
Oct 2 19:18:28.218140 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2023-10-02T19:18:28Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:18:28.218152 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2023-10-02T19:18:28Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:18:28.218166 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2023-10-02T19:18:28Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:18:28.218178 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2023-10-02T19:18:28Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:18:30.115946 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2023-10-02T19:18:30Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:18:30.116311 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2023-10-02T19:18:30Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:18:30.116467 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2023-10-02T19:18:30Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:18:30.116719 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2023-10-02T19:18:30Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:18:30.116784 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2023-10-02T19:18:30Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:18:30.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.116862 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2023-10-02T19:18:30Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:18:30.560612 systemd[1]: Mounted tmp.mount. Oct 2 19:18:30.561428 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:18:30.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.562182 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
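The torcx-generator lines above end with the system state being sealed to /run/metadata/torcx (TORCX_LOWER_PROFILES, TORCX_UPPER_PROFILE, TORCX_PROFILE_PATH, TORCX_BINDIR, TORCX_UNPACKDIR). Below is a sketch of how a consumer could read those values back; it assumes the file stores one KEY="value" pair per line in the EnvironmentFile style, which the log itself does not spell out.

# Sketch: read the torcx metadata sealed at /run/metadata/torcx (see the
# "system state sealed" message above). Assumption: one KEY="value" pair per
# line; the log lists the keys and values but not the exact on-disk layout.
from pathlib import Path

def read_torcx_metadata(path: str = "/run/metadata/torcx") -> dict[str, str]:
    meta: dict[str, str] = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        key, _, value = line.partition("=")
        meta[key] = value.strip().strip('"')
    return meta

if __name__ == "__main__":
    meta = read_torcx_metadata()
    print(meta.get("TORCX_LOWER_PROFILES"), meta.get("TORCX_BINDIR"))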
Oct 2 19:18:30.562317 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:18:30.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.563086 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:18:30.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.563757 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:18:30.563882 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:18:30.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.564563 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:18:30.564674 systemd[1]: Finished modprobe@drm.service. Oct 2 19:18:30.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.565359 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:18:30.565472 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:18:30.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.566181 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:18:30.566377 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:18:30.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.567028 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:18:30.567131 systemd[1]: Finished modprobe@loop.service. 
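The modprobe@*.service instances finishing above (configfs, dm_mod, drm, efi_pstore, fuse, loop) each load one kernel module, and the kernel confirms two of them here ("loop: module loaded", "fuse: init (API version 7.34)"). A quick way to check the same thing after boot is sketched below; note that built-in drivers only appear under /sys/module when they export attributes, so an absent entry is not proof the functionality is missing.

# Sketch: check that the modules loaded by the modprobe@*.service instances
# above are present. Every loaded module gets a directory under /sys/module;
# built-in drivers may not, so "not in /sys/module" is not a hard failure.
from pathlib import Path

def module_present(name: str) -> bool:
    return Path("/sys/module", name).is_dir()

for mod in ("configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"):
    print(f"{mod}: {'present' if module_present(mod) else 'not in /sys/module'}")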
Oct 2 19:18:30.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.567870 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:18:30.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.568643 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:18:30.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.569580 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:18:30.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.570512 systemd[1]: Reached target network-pre.target. Oct 2 19:18:30.571989 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:18:30.573382 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:18:30.573897 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:18:30.575463 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:18:30.579443 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:18:30.580090 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:18:30.580841 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:18:30.581432 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:18:30.582205 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:18:30.583974 systemd-journald[960]: Time spent on flushing to /var/log/journal/ffdb050014d742c4bf086803fce6219b is 18.641ms for 1088 entries. Oct 2 19:18:30.583974 systemd-journald[960]: System Journal (/var/log/journal/ffdb050014d742c4bf086803fce6219b) is 8.0M, max 195.6M, 187.6M free. Oct 2 19:18:30.644463 systemd-journald[960]: Received client request to flush runtime journal. Oct 2 19:18:30.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:18:30.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.583651 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:18:30.587203 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:18:30.587909 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:18:30.645825 udevadm[994]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 2 19:18:30.597009 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:18:30.612305 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:18:30.614752 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:18:30.615986 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:18:30.616683 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:18:30.622527 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:18:30.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:30.624838 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:18:30.637203 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:18:30.645529 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:18:31.006517 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:18:31.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:31.007000 audit: BPF prog-id=21 op=LOAD Oct 2 19:18:31.007000 audit: BPF prog-id=22 op=LOAD Oct 2 19:18:31.007000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:18:31.007000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:18:31.008360 systemd[1]: Starting systemd-udevd.service... Oct 2 19:18:31.022266 systemd-udevd[996]: Using default interface naming scheme 'v252'. Oct 2 19:18:31.042303 systemd[1]: Started systemd-udevd.service. Oct 2 19:18:31.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:31.044000 audit: BPF prog-id=23 op=LOAD Oct 2 19:18:31.045420 systemd[1]: Starting systemd-networkd.service... Oct 2 19:18:31.055000 audit: BPF prog-id=24 op=LOAD Oct 2 19:18:31.055000 audit: BPF prog-id=25 op=LOAD Oct 2 19:18:31.055000 audit: BPF prog-id=26 op=LOAD Oct 2 19:18:31.056508 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:18:31.082409 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 19:18:31.088058 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:18:31.093842 systemd[1]: Started systemd-userdbd.service. 
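systemd-networkd's link-state messages just below ("lo: Link UP", "eth0: Gained carrier") reflect state the kernel also exposes under /sys/class/net. A small sketch that reads it directly: the interface names come from the log, the sysfs paths are standard kernel ABI rather than anything networkd-specific, and the OSError fallback covers the kernel returning EINVAL for 'carrier' while a link is administratively down.

# Sketch: read the link state that systemd-networkd reports ("Link UP",
# "Gained carrier") straight from sysfs.
from pathlib import Path

def link_state(ifname: str) -> tuple[str, bool]:
    base = Path("/sys/class/net", ifname)
    operstate = (base / "operstate").read_text().strip()
    try:
        carrier = (base / "carrier").read_text().strip() == "1"
    except OSError:  # EINVAL while the interface is administratively down
        carrier = False
    return operstate, carrier

for ifname in ("lo", "eth0"):
    state, carrier = link_state(ifname)
    print(f"{ifname}: operstate={state} carrier={carrier}")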
Oct 2 19:18:31.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:31.114259 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 2 19:18:31.126255 kernel: ACPI: button: Power Button [PWRF] Oct 2 19:18:31.133000 audit[997]: AVC avc: denied { confidentiality } for pid=997 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 19:18:31.162068 systemd-networkd[1006]: lo: Link UP Oct 2 19:18:31.162078 systemd-networkd[1006]: lo: Gained carrier Oct 2 19:18:31.162530 systemd-networkd[1006]: Enumeration completed Oct 2 19:18:31.162640 systemd[1]: Started systemd-networkd.service. Oct 2 19:18:31.162651 systemd-networkd[1006]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:18:31.163801 systemd-networkd[1006]: eth0: Link UP Oct 2 19:18:31.163811 systemd-networkd[1006]: eth0: Gained carrier Oct 2 19:18:31.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:31.133000 audit[997]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c6a9320b30 a1=32194 a2=7f169bec3bc5 a3=5 items=106 ppid=996 pid=997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:31.133000 audit: CWD cwd="/" Oct 2 19:18:31.133000 audit: PATH item=0 name=(null) inode=14738 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=1 name=(null) inode=14739 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=2 name=(null) inode=14738 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=3 name=(null) inode=14740 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=4 name=(null) inode=14738 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=5 name=(null) inode=14741 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=6 name=(null) inode=14741 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=7 name=(null) inode=14742 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=8 name=(null) 
inode=14741 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=9 name=(null) inode=14743 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=10 name=(null) inode=14741 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=11 name=(null) inode=14744 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=12 name=(null) inode=14741 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=13 name=(null) inode=14745 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=14 name=(null) inode=14741 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=15 name=(null) inode=14746 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=16 name=(null) inode=14738 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=17 name=(null) inode=14747 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=18 name=(null) inode=14747 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=19 name=(null) inode=14748 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=20 name=(null) inode=14747 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=21 name=(null) inode=14749 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=22 name=(null) inode=14747 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=23 name=(null) inode=14750 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=24 name=(null) inode=14747 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=25 name=(null) inode=14751 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=26 name=(null) inode=14747 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=27 name=(null) inode=14752 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=28 name=(null) inode=14738 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=29 name=(null) inode=14753 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=30 name=(null) inode=14753 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=31 name=(null) inode=14754 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=32 name=(null) inode=14753 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=33 name=(null) inode=14755 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=34 name=(null) inode=14753 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=35 name=(null) inode=14756 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=36 name=(null) inode=14753 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=37 name=(null) inode=14757 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=38 name=(null) inode=14753 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=39 name=(null) inode=14758 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=40 name=(null) inode=14738 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=41 name=(null) inode=14759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=42 name=(null) inode=14759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=43 name=(null) inode=14760 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=44 name=(null) inode=14759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=45 name=(null) inode=14761 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=46 name=(null) inode=14759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=47 name=(null) inode=14762 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=48 name=(null) inode=14759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=49 name=(null) inode=14763 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=50 name=(null) inode=14759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=51 name=(null) inode=14764 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=52 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=53 name=(null) inode=14765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=54 name=(null) inode=14765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=55 name=(null) inode=14766 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=56 name=(null) inode=14765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=57 name=(null) inode=14767 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=58 name=(null) inode=14765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=59 name=(null) inode=14768 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=60 name=(null) inode=14768 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=61 name=(null) inode=14769 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=62 name=(null) inode=14768 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=63 name=(null) inode=14770 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=64 name=(null) inode=14768 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=65 name=(null) inode=14771 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=66 name=(null) inode=14768 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=67 name=(null) inode=14772 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=68 name=(null) inode=14768 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=69 name=(null) inode=14773 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=70 name=(null) inode=14765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=71 name=(null) inode=14774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=72 name=(null) inode=14774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=73 name=(null) inode=14775 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=74 name=(null) inode=14774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=75 name=(null) inode=14776 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=76 name=(null) inode=14774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=77 name=(null) inode=14777 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=78 name=(null) inode=14774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=79 name=(null) inode=14778 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=80 name=(null) inode=14774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=81 name=(null) inode=14779 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=82 name=(null) inode=14765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=83 name=(null) inode=14780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=84 name=(null) inode=14780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=85 name=(null) inode=14781 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=86 name=(null) inode=14780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=87 name=(null) inode=14782 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=88 name=(null) inode=14780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=89 name=(null) inode=14783 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH 
item=90 name=(null) inode=14780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.204245 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Oct 2 19:18:31.133000 audit: PATH item=91 name=(null) inode=14784 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=92 name=(null) inode=14780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=93 name=(null) inode=14785 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=94 name=(null) inode=14765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=95 name=(null) inode=14786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=96 name=(null) inode=14786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=97 name=(null) inode=14787 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=98 name=(null) inode=14786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=99 name=(null) inode=14788 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=100 name=(null) inode=14786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=101 name=(null) inode=14789 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=102 name=(null) inode=14786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=103 name=(null) inode=14790 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=104 name=(null) inode=14786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:18:31.133000 audit: PATH item=105 name=(null) inode=14791 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 
19:18:31.133000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 19:18:31.218247 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 2 19:18:31.227474 systemd-networkd[1006]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:18:31.232249 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 19:18:31.241244 kernel: kvm: Nested Virtualization enabled Oct 2 19:18:31.241320 kernel: SVM: kvm: Nested Paging enabled Oct 2 19:18:31.252243 kernel: EDAC MC: Ver: 3.0.0 Oct 2 19:18:31.266525 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:18:31.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:31.268007 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:18:31.281167 lvm[1031]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:18:31.306023 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:18:31.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:31.306746 systemd[1]: Reached target cryptsetup.target. Oct 2 19:18:31.308141 systemd[1]: Starting lvm2-activation.service... Oct 2 19:18:31.311016 lvm[1032]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:18:31.338529 systemd[1]: Finished lvm2-activation.service. Oct 2 19:18:31.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:31.339275 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:18:31.339834 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:18:31.339853 systemd[1]: Reached target local-fs.target. Oct 2 19:18:31.340406 systemd[1]: Reached target machines.target. Oct 2 19:18:31.341933 systemd[1]: Starting ldconfig.service... Oct 2 19:18:31.342660 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:18:31.342699 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:18:31.343554 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:18:31.344852 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:18:31.346898 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:18:31.347708 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:18:31.347747 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:18:31.348585 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:18:31.351195 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1034 (bootctl) Oct 2 19:18:31.352441 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:18:31.353984 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Oct 2 19:18:31.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:31.363953 systemd-tmpfiles[1037]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:18:31.364860 systemd-tmpfiles[1037]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:18:31.366347 systemd-tmpfiles[1037]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:18:31.388264 systemd-fsck[1042]: fsck.fat 4.2 (2021-01-31) Oct 2 19:18:31.388264 systemd-fsck[1042]: /dev/vda1: 789 files, 115069/258078 clusters Oct 2 19:18:31.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:31.389575 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:18:31.392475 systemd[1]: Mounting boot.mount... Oct 2 19:18:31.548623 systemd[1]: Mounted boot.mount. Oct 2 19:18:31.561652 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:18:31.562291 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:18:31.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:31.563607 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:18:31.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:31.616637 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:18:31.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:31.618594 systemd[1]: Starting audit-rules.service... Oct 2 19:18:31.620035 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:18:31.622000 audit: BPF prog-id=27 op=LOAD Oct 2 19:18:31.621485 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:18:31.623882 systemd[1]: Starting systemd-resolved.service... Oct 2 19:18:31.628000 audit: BPF prog-id=28 op=LOAD Oct 2 19:18:31.628856 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:18:31.630389 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:18:31.631475 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:18:31.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:31.632464 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Oct 2 19:18:31.633000 audit[1058]: SYSTEM_BOOT pid=1058 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:18:31.637201 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:18:31.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:31.645983 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:18:31.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:31.657605 augenrules[1065]: No rules Oct 2 19:18:31.657000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:18:31.657000 audit[1065]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe4b23d0c0 a2=420 a3=0 items=0 ppid=1045 pid=1065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:31.657000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:18:31.658275 systemd[1]: Finished audit-rules.service. Oct 2 19:18:31.673893 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:18:31.674695 systemd[1]: Reached target time-set.target. Oct 2 19:18:32.402662 systemd-timesyncd[1056]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 2 19:18:32.402696 systemd-timesyncd[1056]: Initial clock synchronization to Mon 2023-10-02 19:18:32.402608 UTC. Oct 2 19:18:32.411700 systemd-resolved[1049]: Positive Trust Anchors: Oct 2 19:18:32.411709 systemd-resolved[1049]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:18:32.411734 systemd-resolved[1049]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:18:32.426162 systemd-resolved[1049]: Defaulting to hostname 'linux'. Oct 2 19:18:32.427670 systemd[1]: Started systemd-resolved.service. Oct 2 19:18:32.428322 systemd[1]: Reached target network.target. Oct 2 19:18:32.428811 systemd[1]: Reached target nss-lookup.target. Oct 2 19:18:32.459364 ldconfig[1033]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:18:32.465381 systemd[1]: Finished ldconfig.service. Oct 2 19:18:32.467203 systemd[1]: Starting systemd-update-done.service... Oct 2 19:18:32.472309 systemd[1]: Finished systemd-update-done.service. Oct 2 19:18:32.473010 systemd[1]: Reached target sysinit.target. Oct 2 19:18:32.473625 systemd[1]: Started motdgen.path. Oct 2 19:18:32.474117 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. 
Oct 2 19:18:32.475088 systemd[1]: Started logrotate.timer. Oct 2 19:18:32.475679 systemd[1]: Started mdadm.timer. Oct 2 19:18:32.476152 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:18:32.476719 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:18:32.476758 systemd[1]: Reached target paths.target. Oct 2 19:18:32.477265 systemd[1]: Reached target timers.target. Oct 2 19:18:32.478240 systemd[1]: Listening on dbus.socket. Oct 2 19:18:32.480008 systemd[1]: Starting docker.socket... Oct 2 19:18:32.482531 systemd[1]: Listening on sshd.socket. Oct 2 19:18:32.483107 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:18:32.483433 systemd[1]: Listening on docker.socket. Oct 2 19:18:32.484044 systemd[1]: Reached target sockets.target. Oct 2 19:18:32.484549 systemd[1]: Reached target basic.target. Oct 2 19:18:32.485082 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:18:32.485111 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:18:32.485900 systemd[1]: Starting containerd.service... Oct 2 19:18:32.487227 systemd[1]: Starting dbus.service... Oct 2 19:18:32.488424 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:18:32.489977 systemd[1]: Starting extend-filesystems.service... Oct 2 19:18:32.490679 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:18:32.491631 systemd[1]: Starting motdgen.service... Oct 2 19:18:32.495088 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:18:32.496510 systemd[1]: Starting prepare-critools.service... Oct 2 19:18:32.497898 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:18:32.499300 jq[1076]: false Oct 2 19:18:32.499511 systemd[1]: Starting sshd-keygen.service... Oct 2 19:18:32.502092 systemd[1]: Starting systemd-logind.service... Oct 2 19:18:32.502646 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:18:32.502684 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:18:32.505794 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:18:32.506351 systemd[1]: Starting update-engine.service... Oct 2 19:18:32.507762 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:18:32.510389 jq[1095]: true Oct 2 19:18:32.509683 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:18:32.509817 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:18:32.510071 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:18:32.510189 systemd[1]: Finished motdgen.service. Oct 2 19:18:32.512065 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:18:32.512191 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Oct 2 19:18:32.519602 jq[1100]: true Oct 2 19:18:32.522856 tar[1098]: ./ Oct 2 19:18:32.522856 tar[1098]: ./loopback Oct 2 19:18:32.522654 systemd[1]: Started dbus.service. Oct 2 19:18:32.522532 dbus-daemon[1075]: [system] SELinux support is enabled Oct 2 19:18:32.525034 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:18:32.525055 systemd[1]: Reached target system-config.target. Oct 2 19:18:32.525679 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:18:32.526146 extend-filesystems[1077]: Found sr0 Oct 2 19:18:32.526146 extend-filesystems[1077]: Found vda Oct 2 19:18:32.526146 extend-filesystems[1077]: Found vda1 Oct 2 19:18:32.526146 extend-filesystems[1077]: Found vda2 Oct 2 19:18:32.526146 extend-filesystems[1077]: Found vda3 Oct 2 19:18:32.526146 extend-filesystems[1077]: Found usr Oct 2 19:18:32.526146 extend-filesystems[1077]: Found vda4 Oct 2 19:18:32.526146 extend-filesystems[1077]: Found vda6 Oct 2 19:18:32.526146 extend-filesystems[1077]: Found vda7 Oct 2 19:18:32.526146 extend-filesystems[1077]: Found vda9 Oct 2 19:18:32.526146 extend-filesystems[1077]: Checking size of /dev/vda9 Oct 2 19:18:32.525693 systemd[1]: Reached target user-config.target. Oct 2 19:18:32.532895 tar[1099]: crictl Oct 2 19:18:32.538425 extend-filesystems[1077]: Old size kept for /dev/vda9 Oct 2 19:18:32.539504 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:18:32.542598 systemd[1]: Finished extend-filesystems.service. Oct 2 19:18:32.570173 env[1101]: time="2023-10-02T19:18:32.570052242Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:18:32.576133 tar[1098]: ./bandwidth Oct 2 19:18:32.592371 env[1101]: time="2023-10-02T19:18:32.592343506Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:18:32.592471 env[1101]: time="2023-10-02T19:18:32.592448853Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:18:32.597887 env[1101]: time="2023-10-02T19:18:32.597857827Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:18:32.597887 env[1101]: time="2023-10-02T19:18:32.597885409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:18:32.598082 env[1101]: time="2023-10-02T19:18:32.598060768Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:18:32.598082 env[1101]: time="2023-10-02T19:18:32.598077820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Oct 2 19:18:32.598149 env[1101]: time="2023-10-02T19:18:32.598088369Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:18:32.598149 env[1101]: time="2023-10-02T19:18:32.598096405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:18:32.598184 env[1101]: time="2023-10-02T19:18:32.598156076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:18:32.598343 env[1101]: time="2023-10-02T19:18:32.598323400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:18:32.598451 env[1101]: time="2023-10-02T19:18:32.598427235Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:18:32.598451 env[1101]: time="2023-10-02T19:18:32.598445810Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:18:32.598525 env[1101]: time="2023-10-02T19:18:32.598490273Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:18:32.598525 env[1101]: time="2023-10-02T19:18:32.598501304Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:18:32.607270 tar[1098]: ./ptp Oct 2 19:18:32.641655 tar[1098]: ./vlan Oct 2 19:18:32.673969 update_engine[1094]: I1002 19:18:32.672930 1094 main.cc:92] Flatcar Update Engine starting Oct 2 19:18:32.676641 systemd[1]: Started update-engine.service. Oct 2 19:18:32.676803 update_engine[1094]: I1002 19:18:32.676661 1094 update_check_scheduler.cc:74] Next update check in 2m32s Oct 2 19:18:32.678650 tar[1098]: ./host-device Oct 2 19:18:32.678664 systemd[1]: Started locksmithd.service. Oct 2 19:18:32.681692 systemd-logind[1090]: Watching system buttons on /dev/input/event1 (Power Button) Oct 2 19:18:32.681887 systemd-logind[1090]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 2 19:18:32.683348 systemd-logind[1090]: New seat seat0. Oct 2 19:18:32.689174 systemd[1]: Started systemd-logind.service. Oct 2 19:18:32.710139 tar[1098]: ./tuning Oct 2 19:18:32.739406 env[1101]: time="2023-10-02T19:18:32.736574801Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:18:32.739406 env[1101]: time="2023-10-02T19:18:32.736623703Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:18:32.739406 env[1101]: time="2023-10-02T19:18:32.736637328Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:18:32.739406 env[1101]: time="2023-10-02T19:18:32.736681812Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:18:32.739406 env[1101]: time="2023-10-02T19:18:32.736696950Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:18:32.739406 env[1101]: time="2023-10-02T19:18:32.736710726Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Oct 2 19:18:32.739406 env[1101]: time="2023-10-02T19:18:32.736761621Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:18:32.739406 env[1101]: time="2023-10-02T19:18:32.736779064Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:18:32.739406 env[1101]: time="2023-10-02T19:18:32.736793561Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:18:32.739406 env[1101]: time="2023-10-02T19:18:32.736807638Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:18:32.739406 env[1101]: time="2023-10-02T19:18:32.736826092Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:18:32.739406 env[1101]: time="2023-10-02T19:18:32.736839177Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:18:32.739406 env[1101]: time="2023-10-02T19:18:32.736950516Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:18:32.739406 env[1101]: time="2023-10-02T19:18:32.737034293Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:18:32.739774 bash[1127]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:18:32.739924 tar[1098]: ./vrf Oct 2 19:18:32.739971 env[1101]: time="2023-10-02T19:18:32.737344675Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:18:32.739971 env[1101]: time="2023-10-02T19:18:32.737378148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:18:32.739971 env[1101]: time="2023-10-02T19:18:32.737391102Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:18:32.739971 env[1101]: time="2023-10-02T19:18:32.737446125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:18:32.739971 env[1101]: time="2023-10-02T19:18:32.737458999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:18:32.739971 env[1101]: time="2023-10-02T19:18:32.737471182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:18:32.739971 env[1101]: time="2023-10-02T19:18:32.737482203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:18:32.739971 env[1101]: time="2023-10-02T19:18:32.737553997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:18:32.739971 env[1101]: time="2023-10-02T19:18:32.737566861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:18:32.739971 env[1101]: time="2023-10-02T19:18:32.737578884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:18:32.739971 env[1101]: time="2023-10-02T19:18:32.737589614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Oct 2 19:18:32.739971 env[1101]: time="2023-10-02T19:18:32.737602298Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:18:32.739971 env[1101]: time="2023-10-02T19:18:32.737709068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:18:32.739971 env[1101]: time="2023-10-02T19:18:32.737723716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:18:32.739971 env[1101]: time="2023-10-02T19:18:32.737737051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:18:32.740269 env[1101]: time="2023-10-02T19:18:32.737749855Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:18:32.740269 env[1101]: time="2023-10-02T19:18:32.737766436Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:18:32.740269 env[1101]: time="2023-10-02T19:18:32.737777577Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:18:32.740269 env[1101]: time="2023-10-02T19:18:32.737796302Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:18:32.740269 env[1101]: time="2023-10-02T19:18:32.737834774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 19:18:32.740361 env[1101]: time="2023-10-02T19:18:32.738058864Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false 
EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:18:32.740361 env[1101]: time="2023-10-02T19:18:32.738126301Z" level=info msg="Connect containerd service" Oct 2 19:18:32.740361 env[1101]: time="2023-10-02T19:18:32.738153492Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:18:32.740361 env[1101]: time="2023-10-02T19:18:32.738634854Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:18:32.740361 env[1101]: time="2023-10-02T19:18:32.738734611Z" level=info msg="Start subscribing containerd event" Oct 2 19:18:32.740361 env[1101]: time="2023-10-02T19:18:32.738769857Z" level=info msg="Start recovering state" Oct 2 19:18:32.740361 env[1101]: time="2023-10-02T19:18:32.738814611Z" level=info msg="Start event monitor" Oct 2 19:18:32.740361 env[1101]: time="2023-10-02T19:18:32.738823819Z" level=info msg="Start snapshots syncer" Oct 2 19:18:32.740361 env[1101]: time="2023-10-02T19:18:32.738832796Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:18:32.740361 env[1101]: time="2023-10-02T19:18:32.738840490Z" level=info msg="Start streaming server" Oct 2 19:18:32.740361 env[1101]: time="2023-10-02T19:18:32.739138308Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:18:32.740361 env[1101]: time="2023-10-02T19:18:32.739166071Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:18:32.743339 env[1101]: time="2023-10-02T19:18:32.740624686Z" level=info msg="containerd successfully booted in 0.172379s" Oct 2 19:18:32.740534 systemd[1]: Started containerd.service. Oct 2 19:18:32.741870 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:18:32.758821 locksmithd[1133]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:18:32.773094 tar[1098]: ./sbr Oct 2 19:18:32.804146 tar[1098]: ./tap Oct 2 19:18:32.819988 sshd_keygen[1097]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:18:32.836694 systemd[1]: Finished sshd-keygen.service. Oct 2 19:18:32.837688 tar[1098]: ./dhcp Oct 2 19:18:32.838564 systemd[1]: Starting issuegen.service... Oct 2 19:18:32.843194 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:18:32.843300 systemd[1]: Finished issuegen.service. Oct 2 19:18:32.844796 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:18:32.849883 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:18:32.851599 systemd[1]: Started getty@tty1.service. Oct 2 19:18:32.852979 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 19:18:32.853668 systemd[1]: Reached target getty.target. Oct 2 19:18:32.914205 tar[1098]: ./static Oct 2 19:18:32.936551 tar[1098]: ./firewall Oct 2 19:18:32.960676 systemd[1]: Finished prepare-critools.service. 
Oct 2 19:18:32.973047 tar[1098]: ./macvlan Oct 2 19:18:33.002999 tar[1098]: ./dummy Oct 2 19:18:33.033770 tar[1098]: ./bridge Oct 2 19:18:33.065952 tar[1098]: ./ipvlan Oct 2 19:18:33.095476 tar[1098]: ./portmap Oct 2 19:18:33.123662 tar[1098]: ./host-local Oct 2 19:18:33.131150 systemd-networkd[1006]: eth0: Gained IPv6LL Oct 2 19:18:33.155973 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:18:33.156831 systemd[1]: Reached target multi-user.target. Oct 2 19:18:33.158432 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:18:33.163825 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:18:33.163960 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:18:33.164718 systemd[1]: Startup finished in 531ms (kernel) + 7.493s (initrd) + 5.257s (userspace) = 13.282s. Oct 2 19:18:37.532068 systemd[1]: Created slice system-sshd.slice. Oct 2 19:18:37.533106 systemd[1]: Started sshd@0-10.0.0.130:22-10.0.0.1:46202.service. Oct 2 19:18:37.575489 sshd[1158]: Accepted publickey for core from 10.0.0.1 port 46202 ssh2: RSA SHA256:9/VFs6Vh3tGO5nFEXFlJ5Qu3Hg4nXNY9KvFKo+bazB4 Oct 2 19:18:37.576945 sshd[1158]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:18:37.585664 systemd-logind[1090]: New session 1 of user core. Oct 2 19:18:37.586490 systemd[1]: Created slice user-500.slice. Oct 2 19:18:37.587470 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:18:37.596042 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:18:37.597340 systemd[1]: Starting user@500.service... Oct 2 19:18:37.599730 (systemd)[1161]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:18:37.667209 systemd[1161]: Queued start job for default target default.target. Oct 2 19:18:37.667611 systemd[1161]: Reached target paths.target. Oct 2 19:18:37.667628 systemd[1161]: Reached target sockets.target. Oct 2 19:18:37.667638 systemd[1161]: Reached target timers.target. Oct 2 19:18:37.667648 systemd[1161]: Reached target basic.target. Oct 2 19:18:37.667679 systemd[1161]: Reached target default.target. Oct 2 19:18:37.667700 systemd[1161]: Startup finished in 62ms. Oct 2 19:18:37.667870 systemd[1]: Started user@500.service. Oct 2 19:18:37.669149 systemd[1]: Started session-1.scope. Oct 2 19:18:37.719787 systemd[1]: Started sshd@1-10.0.0.130:22-10.0.0.1:46204.service. Oct 2 19:18:37.756028 sshd[1170]: Accepted publickey for core from 10.0.0.1 port 46204 ssh2: RSA SHA256:9/VFs6Vh3tGO5nFEXFlJ5Qu3Hg4nXNY9KvFKo+bazB4 Oct 2 19:18:37.757114 sshd[1170]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:18:37.760670 systemd-logind[1090]: New session 2 of user core. Oct 2 19:18:37.761659 systemd[1]: Started session-2.scope. Oct 2 19:18:37.814342 sshd[1170]: pam_unix(sshd:session): session closed for user core Oct 2 19:18:37.817303 systemd[1]: Started sshd@2-10.0.0.130:22-10.0.0.1:46214.service. Oct 2 19:18:37.817685 systemd[1]: sshd@1-10.0.0.130:22-10.0.0.1:46204.service: Deactivated successfully. Oct 2 19:18:37.818228 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:18:37.818855 systemd-logind[1090]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:18:37.819670 systemd-logind[1090]: Removed session 2. 
Oct 2 19:18:37.856487 sshd[1175]: Accepted publickey for core from 10.0.0.1 port 46214 ssh2: RSA SHA256:9/VFs6Vh3tGO5nFEXFlJ5Qu3Hg4nXNY9KvFKo+bazB4 Oct 2 19:18:37.857634 sshd[1175]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:18:37.860992 systemd-logind[1090]: New session 3 of user core. Oct 2 19:18:37.861748 systemd[1]: Started session-3.scope. Oct 2 19:18:37.911098 sshd[1175]: pam_unix(sshd:session): session closed for user core Oct 2 19:18:37.913568 systemd[1]: sshd@2-10.0.0.130:22-10.0.0.1:46214.service: Deactivated successfully. Oct 2 19:18:37.914044 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 19:18:37.914592 systemd-logind[1090]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:18:37.915507 systemd[1]: Started sshd@3-10.0.0.130:22-10.0.0.1:46226.service. Oct 2 19:18:37.916194 systemd-logind[1090]: Removed session 3. Oct 2 19:18:37.950792 sshd[1183]: Accepted publickey for core from 10.0.0.1 port 46226 ssh2: RSA SHA256:9/VFs6Vh3tGO5nFEXFlJ5Qu3Hg4nXNY9KvFKo+bazB4 Oct 2 19:18:37.951886 sshd[1183]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:18:37.954866 systemd-logind[1090]: New session 4 of user core. Oct 2 19:18:37.955525 systemd[1]: Started session-4.scope. Oct 2 19:18:38.008968 sshd[1183]: pam_unix(sshd:session): session closed for user core Oct 2 19:18:38.011251 systemd[1]: sshd@3-10.0.0.130:22-10.0.0.1:46226.service: Deactivated successfully. Oct 2 19:18:38.011689 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:18:38.012307 systemd-logind[1090]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:18:38.013138 systemd[1]: Started sshd@4-10.0.0.130:22-10.0.0.1:46238.service. Oct 2 19:18:38.013708 systemd-logind[1090]: Removed session 4. Oct 2 19:18:38.047693 sshd[1189]: Accepted publickey for core from 10.0.0.1 port 46238 ssh2: RSA SHA256:9/VFs6Vh3tGO5nFEXFlJ5Qu3Hg4nXNY9KvFKo+bazB4 Oct 2 19:18:38.048750 sshd[1189]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:18:38.052095 systemd-logind[1090]: New session 5 of user core. Oct 2 19:18:38.053029 systemd[1]: Started session-5.scope. Oct 2 19:18:38.113387 sudo[1192]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:18:38.113551 sudo[1192]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:18:38.125191 dbus-daemon[1075]: \xd0\u001d\u001b\xa0\xc3U: received setenforce notice (enforcing=980085616) Oct 2 19:18:38.127872 sudo[1192]: pam_unix(sudo:session): session closed for user root Oct 2 19:18:38.130434 sshd[1189]: pam_unix(sshd:session): session closed for user core Oct 2 19:18:38.133627 systemd[1]: sshd@4-10.0.0.130:22-10.0.0.1:46238.service: Deactivated successfully. Oct 2 19:18:38.134298 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:18:38.134875 systemd-logind[1090]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:18:38.136042 systemd[1]: Started sshd@5-10.0.0.130:22-10.0.0.1:46240.service. Oct 2 19:18:38.136721 systemd-logind[1090]: Removed session 5. Oct 2 19:18:38.171388 sshd[1196]: Accepted publickey for core from 10.0.0.1 port 46240 ssh2: RSA SHA256:9/VFs6Vh3tGO5nFEXFlJ5Qu3Hg4nXNY9KvFKo+bazB4 Oct 2 19:18:38.172543 sshd[1196]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:18:38.176332 systemd-logind[1090]: New session 6 of user core. Oct 2 19:18:38.177367 systemd[1]: Started session-6.scope. 
Oct 2 19:18:38.228728 sudo[1200]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:18:38.228903 sudo[1200]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:18:38.231157 sudo[1200]: pam_unix(sudo:session): session closed for user root Oct 2 19:18:38.235421 sudo[1199]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:18:38.235580 sudo[1199]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:18:38.244104 systemd[1]: Stopping audit-rules.service... Oct 2 19:18:38.243000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:18:38.245024 auditctl[1203]: No rules Oct 2 19:18:38.245344 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:18:38.245501 systemd[1]: Stopped audit-rules.service. Oct 2 19:18:38.245648 kernel: kauditd_printk_skb: 231 callbacks suppressed Oct 2 19:18:38.245701 kernel: audit: type=1305 audit(1696274318.243:162): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:18:38.243000 audit[1203]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff85d15ab0 a2=420 a3=0 items=0 ppid=1 pid=1203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:38.246760 systemd[1]: Starting audit-rules.service... Oct 2 19:18:38.252958 kernel: audit: type=1300 audit(1696274318.243:162): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff85d15ab0 a2=420 a3=0 items=0 ppid=1 pid=1203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:38.253007 kernel: audit: type=1327 audit(1696274318.243:162): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:18:38.253021 kernel: audit: type=1131 audit(1696274318.243:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:38.243000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:18:38.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:38.261708 augenrules[1220]: No rules Oct 2 19:18:38.262410 systemd[1]: Finished audit-rules.service. Oct 2 19:18:38.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:38.263173 sudo[1199]: pam_unix(sudo:session): session closed for user root Oct 2 19:18:38.264461 sshd[1196]: pam_unix(sshd:session): session closed for user core Oct 2 19:18:38.261000 audit[1199]: USER_END pid=1199 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:18:38.267180 systemd[1]: Started sshd@6-10.0.0.130:22-10.0.0.1:46248.service. Oct 2 19:18:38.267476 kernel: audit: type=1130 audit(1696274318.261:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:38.267513 kernel: audit: type=1106 audit(1696274318.261:165): pid=1199 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:18:38.267529 kernel: audit: type=1104 audit(1696274318.261:166): pid=1199 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:18:38.261000 audit[1199]: CRED_DISP pid=1199 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:18:38.267564 systemd[1]: sshd@5-10.0.0.130:22-10.0.0.1:46240.service: Deactivated successfully. Oct 2 19:18:38.268005 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:18:38.269555 systemd-logind[1090]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:18:38.269633 kernel: audit: type=1106 audit(1696274318.264:167): pid=1196 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:18:38.264000 audit[1196]: USER_END pid=1196 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:18:38.270347 systemd-logind[1090]: Removed session 6. Oct 2 19:18:38.264000 audit[1196]: CRED_DISP pid=1196 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:18:38.274517 kernel: audit: type=1104 audit(1696274318.264:168): pid=1196 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:18:38.274566 kernel: audit: type=1130 audit(1696274318.264:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.130:22-10.0.0.1:46248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:38.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.130:22-10.0.0.1:46248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:38.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.130:22-10.0.0.1:46240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:18:38.300000 audit[1225]: USER_ACCT pid=1225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:18:38.301979 sshd[1225]: Accepted publickey for core from 10.0.0.1 port 46248 ssh2: RSA SHA256:9/VFs6Vh3tGO5nFEXFlJ5Qu3Hg4nXNY9KvFKo+bazB4 Oct 2 19:18:38.301000 audit[1225]: CRED_ACQ pid=1225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:18:38.301000 audit[1225]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc2f70e3e0 a2=3 a3=0 items=0 ppid=1 pid=1225 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:38.301000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:18:38.302814 sshd[1225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:18:38.305763 systemd-logind[1090]: New session 7 of user core. Oct 2 19:18:38.306512 systemd[1]: Started session-7.scope. Oct 2 19:18:38.308000 audit[1225]: USER_START pid=1225 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:18:38.309000 audit[1228]: CRED_ACQ pid=1228 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:18:38.355000 audit[1229]: USER_ACCT pid=1229 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:18:38.355000 audit[1229]: CRED_REFR pid=1229 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:18:38.356442 sudo[1229]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:18:38.356596 sudo[1229]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:18:38.357000 audit[1229]: USER_START pid=1229 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:18:38.888281 systemd[1]: Reloading. 
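The contents of /home/core/install.sh are not captured in the log; the "systemd[1]: Reloading." entry it triggers, and the kubelet unit that shows up after the reload, are consistent with a script that drops in unit files and runs a daemon-reload. A sketch of that pattern (the file names and unit source below are assumptions, not taken from the log):

    #!/bin/bash
    # hypothetical install.sh consistent with the entries that follow
    set -euo pipefail
    cp /home/core/kubelet.service /etc/systemd/system/kubelet.service   # assumed unit source
    systemctl daemon-reload                 # produces the "systemd[1]: Reloading." entry above
    systemctl enable --now kubelet.service  # matches the later "Started kubelet.service" entry

The long run of audit records that follows (AVC denials for the bpf and perfmon capabilities interleaved with BPF prog-id LOAD/UNLOAD events, tagged pid=1 comm="systemd") is emitted while systemd re-attaches its per-unit BPF programs as part of this reload.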
Oct 2 19:18:38.947637 /usr/lib/systemd/system-generators/torcx-generator[1259]: time="2023-10-02T19:18:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:18:38.947662 /usr/lib/systemd/system-generators/torcx-generator[1259]: time="2023-10-02T19:18:38Z" level=info msg="torcx already run" Oct 2 19:18:39.352674 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:18:39.352690 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:18:39.371231 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:18:39.422000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.422000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.422000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.422000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.422000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.422000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.422000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.422000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.422000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.422000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.422000 audit: BPF prog-id=34 op=LOAD Oct 2 19:18:39.422000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit: BPF prog-id=35 op=LOAD Oct 2 19:18:39.423000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit: BPF prog-id=36 op=LOAD Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.423000 audit: BPF prog-id=37 op=LOAD Oct 2 19:18:39.423000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:18:39.423000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:18:39.424000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.424000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.424000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.424000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.424000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.424000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:18:39.424000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.424000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.424000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.424000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.424000 audit: BPF prog-id=38 op=LOAD Oct 2 19:18:39.424000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:18:39.425000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.425000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.425000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.425000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.425000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.425000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.425000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.425000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.425000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.425000 audit: BPF prog-id=39 op=LOAD Oct 2 19:18:39.425000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.425000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.425000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.425000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.425000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.425000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.425000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.425000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.425000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.425000 audit: BPF prog-id=40 op=LOAD Oct 2 19:18:39.425000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:18:39.425000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit: BPF prog-id=41 op=LOAD Oct 2 19:18:39.427000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit: BPF prog-id=42 op=LOAD Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit[1]: 
AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.427000 audit: BPF prog-id=43 op=LOAD Oct 2 19:18:39.427000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:18:39.427000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:18:39.428000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.428000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.428000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.428000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.428000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.428000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.428000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.428000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.428000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.428000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.428000 audit: BPF prog-id=44 op=LOAD Oct 2 19:18:39.428000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:18:39.428000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.428000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.428000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.428000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.428000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:18:39.428000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.428000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.428000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.428000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit: BPF prog-id=45 op=LOAD Oct 2 19:18:39.429000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit: BPF prog-id=46 op=LOAD Oct 2 19:18:39.429000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC 
avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit: BPF prog-id=47 op=LOAD Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.429000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:18:39.429000 audit: BPF prog-id=48 op=LOAD Oct 2 19:18:39.429000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:18:39.429000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:18:39.437716 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:18:39.442082 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:18:39.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:39.442499 systemd[1]: Reached target network-online.target. Oct 2 19:18:39.443637 systemd[1]: Started kubelet.service. Oct 2 19:18:39.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:39.452062 systemd[1]: Starting coreos-metadata.service... Oct 2 19:18:39.457238 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 2 19:18:39.457362 systemd[1]: Finished coreos-metadata.service. Oct 2 19:18:39.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:39.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:39.488264 kubelet[1301]: E1002 19:18:39.488140 1301 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 2 19:18:39.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 19:18:39.490507 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 19:18:39.490632 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 19:18:39.690275 systemd[1]: Stopped kubelet.service. Oct 2 19:18:39.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:39.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:39.704133 systemd[1]: Reloading. 
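The kubelet failure above is the usual symptom of the service starting before the node has been provisioned: /var/lib/kubelet/config.yaml is normally written by kubeadm rather than shipped in the image. A sketch of the usual remediation (the pod CIDR and the init-versus-join choice are assumptions about this cluster, not taken from the log):

    ls /var/lib/kubelet/config.yaml 2>/dev/null || echo "kubelet not configured yet"   # confirms the missing file kubelet reports above
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16   # control-plane node: writes /var/lib/kubelet/config.yaml and restarts kubelet
    # on a worker, "kubeadm join <control-plane>:6443 --token ... --discovery-token-ca-cert-hash sha256:..." writes the same file

kubeadm init also writes /etc/kubernetes/kubelet.conf with the credentials the unit needs, after which the CNI plugins unpacked earlier (bridge, portmap, host-local, ...) can be consumed by whichever pod network add-on is installed.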
Oct 2 19:18:39.763135 /usr/lib/systemd/system-generators/torcx-generator[1369]: time="2023-10-02T19:18:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:18:39.763163 /usr/lib/systemd/system-generators/torcx-generator[1369]: time="2023-10-02T19:18:39Z" level=info msg="torcx already run" Oct 2 19:18:39.820374 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:18:39.820390 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:18:39.839354 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:18:39.892000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.892000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.892000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.892000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.892000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.892000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.892000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.892000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.892000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.893000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.893000 audit: BPF prog-id=49 op=LOAD Oct 2 19:18:39.893000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:18:39.893000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.893000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.893000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.893000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.893000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.893000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.893000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.893000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.893000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.893000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.893000 audit: BPF prog-id=50 op=LOAD Oct 2 19:18:39.893000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:18:39.893000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.893000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.893000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.893000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.893000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.893000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.893000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.893000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:18:39.893000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.893000 audit: BPF prog-id=51 op=LOAD Oct 2 19:18:39.893000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.893000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.893000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.894000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.894000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.894000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.894000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.894000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.894000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.894000 audit: BPF prog-id=52 op=LOAD Oct 2 19:18:39.894000 audit: BPF prog-id=36 op=UNLOAD Oct 2 19:18:39.894000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:18:39.895000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit: BPF prog-id=53 op=LOAD Oct 2 19:18:39.895000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit: BPF prog-id=54 op=LOAD Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.895000 audit: BPF prog-id=55 op=LOAD Oct 2 19:18:39.895000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:18:39.895000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:18:39.897000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.897000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.897000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.897000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.897000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.897000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.897000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.897000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.897000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.897000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.897000 audit: BPF prog-id=56 op=LOAD Oct 2 19:18:39.897000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:18:39.897000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.897000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.897000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.897000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.897000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.897000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.897000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.897000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.897000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.897000 audit: BPF prog-id=57 op=LOAD Oct 2 19:18:39.897000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.898000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.898000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.898000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.898000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.898000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.898000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.898000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.898000 audit[1]: 
AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.898000 audit: BPF prog-id=58 op=LOAD Oct 2 19:18:39.898000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:18:39.898000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:18:39.898000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.898000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.898000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.898000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.898000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.898000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.898000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.898000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.898000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.898000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.898000 audit: BPF prog-id=59 op=LOAD Oct 2 19:18:39.898000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:18:39.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit: BPF prog-id=60 op=LOAD Oct 2 19:18:39.899000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit: BPF prog-id=61 op=LOAD Oct 2 19:18:39.899000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC 
avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.900000 audit: BPF prog-id=62 op=LOAD Oct 2 19:18:39.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:18:39.900000 audit: BPF prog-id=63 op=LOAD Oct 2 19:18:39.900000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:18:39.900000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:18:39.910613 systemd[1]: Started kubelet.service. Oct 2 19:18:39.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:39.950191 kubelet[1411]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:18:39.950191 kubelet[1411]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 2 19:18:39.950191 kubelet[1411]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:18:39.950191 kubelet[1411]: I1002 19:18:39.950138 1411 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:18:40.276682 kubelet[1411]: I1002 19:18:40.276565 1411 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Oct 2 19:18:40.276682 kubelet[1411]: I1002 19:18:40.276597 1411 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:18:40.276838 kubelet[1411]: I1002 19:18:40.276829 1411 server.go:895] "Client rotation is on, will bootstrap in background" Oct 2 19:18:40.278553 kubelet[1411]: I1002 19:18:40.278509 1411 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:18:40.283781 kubelet[1411]: I1002 19:18:40.283760 1411 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 19:18:40.283992 kubelet[1411]: I1002 19:18:40.283981 1411 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:18:40.284125 kubelet[1411]: I1002 19:18:40.284107 1411 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 2 19:18:40.284227 kubelet[1411]: I1002 19:18:40.284128 1411 topology_manager.go:138] "Creating topology manager with none policy" Oct 2 19:18:40.284227 kubelet[1411]: I1002 19:18:40.284147 1411 container_manager_linux.go:301] "Creating device plugin manager" Oct 2 19:18:40.284269 kubelet[1411]: I1002 19:18:40.284235 1411 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:18:40.284306 kubelet[1411]: I1002 19:18:40.284293 1411 kubelet.go:393] "Attempting to sync node with API server" Oct 2 19:18:40.284329 kubelet[1411]: I1002 19:18:40.284313 1411 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:18:40.284350 kubelet[1411]: I1002 19:18:40.284337 1411 kubelet.go:309] "Adding apiserver pod source" Oct 2 19:18:40.284350 kubelet[1411]: I1002 19:18:40.284350 1411 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:18:40.284436 kubelet[1411]: E1002 19:18:40.284426 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:40.284487 kubelet[1411]: E1002 19:18:40.284469 1411 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:40.284816 kubelet[1411]: I1002 19:18:40.284798 1411 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:18:40.285095 kubelet[1411]: W1002 19:18:40.285080 1411 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 2 19:18:40.285473 kubelet[1411]: I1002 19:18:40.285453 1411 server.go:1232] "Started kubelet" Oct 2 19:18:40.285520 kubelet[1411]: I1002 19:18:40.285509 1411 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:18:40.285626 kubelet[1411]: I1002 19:18:40.285601 1411 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Oct 2 19:18:40.285949 kubelet[1411]: I1002 19:18:40.285917 1411 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 2 19:18:40.286188 kubelet[1411]: I1002 19:18:40.286164 1411 server.go:462] "Adding debug handlers to kubelet server" Oct 2 19:18:40.286244 kubelet[1411]: E1002 19:18:40.286229 1411 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:18:40.286277 kubelet[1411]: E1002 19:18:40.286248 1411 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:18:40.286000 audit[1411]: AVC avc: denied { mac_admin } for pid=1411 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:40.286000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:18:40.286000 audit[1411]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000bff200 a1=c0002754b8 a2=c000bff1d0 a3=25 items=0 ppid=1 pid=1411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:40.286000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:18:40.286000 audit[1411]: AVC avc: denied { mac_admin } for pid=1411 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:40.286000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:18:40.286000 audit[1411]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000511600 a1=c0002754d0 a2=c000bff290 a3=25 items=0 ppid=1 pid=1411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:40.286000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:18:40.287348 kubelet[1411]: I1002 19:18:40.287104 1411 kubelet.go:1386] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:18:40.287348 kubelet[1411]: I1002 19:18:40.287143 1411 kubelet.go:1390] "Unprivileged containerized plugins might not work, could not 
set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:18:40.287509 kubelet[1411]: I1002 19:18:40.287489 1411 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:18:40.287555 kubelet[1411]: I1002 19:18:40.287533 1411 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 2 19:18:40.287953 kubelet[1411]: I1002 19:18:40.287926 1411 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 2 19:18:40.287996 kubelet[1411]: I1002 19:18:40.287990 1411 reconciler_new.go:29] "Reconciler: start to sync state" Oct 2 19:18:40.288221 kubelet[1411]: E1002 19:18:40.288211 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" Oct 2 19:18:40.296576 kubelet[1411]: E1002 19:18:40.296555 1411 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.130\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Oct 2 19:18:40.296753 kubelet[1411]: W1002 19:18:40.296730 1411 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:18:40.296840 kubelet[1411]: E1002 19:18:40.296826 1411 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:18:40.296958 kubelet[1411]: W1002 19:18:40.296926 1411 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:18:40.297036 kubelet[1411]: E1002 19:18:40.297023 1411 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:18:40.297524 kubelet[1411]: W1002 19:18:40.297495 1411 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.130" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:18:40.297571 kubelet[1411]: E1002 19:18:40.297531 1411 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.130" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:18:40.297650 kubelet[1411]: E1002 19:18:40.297573 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.130.178a6081ce3aeb92", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.130", UID:"10.0.0.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.130"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 285428626, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 285428626, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.130"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:18:40.299017 kubelet[1411]: E1002 19:18:40.298950 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.130.178a6081ce474806", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.130", UID:"10.0.0.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.130"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 286238726, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 286238726, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.130"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:18:40.308630 kubelet[1411]: I1002 19:18:40.308614 1411 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 2 19:18:40.308757 kubelet[1411]: I1002 19:18:40.308738 1411 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 2 19:18:40.308827 kubelet[1411]: I1002 19:18:40.308814 1411 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:18:40.309270 kubelet[1411]: E1002 19:18:40.309213 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.130.178a6081cf94f62c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.130", UID:"10.0.0.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.130 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.130"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 308106796, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 308106796, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.130"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:18:40.310062 kubelet[1411]: E1002 19:18:40.310012 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.130.178a6081cf95096d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.130", UID:"10.0.0.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.130 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.130"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 308111725, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 308111725, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.130"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:18:40.310647 kubelet[1411]: E1002 19:18:40.310603 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.130.178a6081cf9514e5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.130", UID:"10.0.0.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.130 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.130"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 308114661, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 308114661, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.130"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:18:40.314000 audit[1428]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:40.314000 audit[1428]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc0ea92610 a2=0 a3=7ffc0ea925fc items=0 ppid=1411 pid=1428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:40.314000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:18:40.315000 audit[1431]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1431 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:40.315000 audit[1431]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffc3beb4ab0 a2=0 a3=7ffc3beb4a9c items=0 ppid=1411 pid=1431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:40.315000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:18:40.317000 audit[1433]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1433 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:40.317000 audit[1433]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd3df871e0 a2=0 a3=7ffd3df871cc items=0 ppid=1411 pid=1433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:40.317000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:18:40.382000 audit[1438]: NETFILTER_CFG 
table=filter:5 family=2 entries=2 op=nft_register_chain pid=1438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:40.382000 audit[1438]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffed3817d50 a2=0 a3=7ffed3817d3c items=0 ppid=1411 pid=1438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:40.382000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:18:40.388919 kubelet[1411]: I1002 19:18:40.388899 1411 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.130" Oct 2 19:18:40.390238 kubelet[1411]: E1002 19:18:40.390206 1411 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.130" Oct 2 19:18:40.390238 kubelet[1411]: E1002 19:18:40.390177 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.130.178a6081cf94f62c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.130", UID:"10.0.0.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.130 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.130"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 308106796, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 388851842, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.130"}': 'events "10.0.0.130.178a6081cf94f62c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:18:40.390890 kubelet[1411]: E1002 19:18:40.390851 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.130.178a6081cf95096d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.130", UID:"10.0.0.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.130 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.130"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 308111725, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 388859576, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.130"}': 'events "10.0.0.130.178a6081cf95096d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:18:40.391565 kubelet[1411]: E1002 19:18:40.391521 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.130.178a6081cf9514e5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.130", UID:"10.0.0.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.130 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.130"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 308114661, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 388862231, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.130"}': 'events "10.0.0.130.178a6081cf9514e5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:18:40.415000 audit[1443]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:40.415000 audit[1443]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffeac50d1a0 a2=0 a3=7ffeac50d18c items=0 ppid=1411 pid=1443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:40.415000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:18:40.416329 kubelet[1411]: I1002 19:18:40.416278 1411 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 2 19:18:40.415000 audit[1444]: NETFILTER_CFG table=mangle:7 family=10 entries=2 op=nft_register_chain pid=1444 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:40.415000 audit[1444]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffefe1e3d10 a2=0 a3=7ffefe1e3cfc items=0 ppid=1411 pid=1444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:40.415000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:18:40.417107 kubelet[1411]: I1002 19:18:40.417083 1411 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 2 19:18:40.417107 kubelet[1411]: I1002 19:18:40.417113 1411 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 2 19:18:40.416000 audit[1445]: NETFILTER_CFG table=mangle:8 family=2 entries=1 op=nft_register_chain pid=1445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:40.416000 audit[1445]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd46c04310 a2=0 a3=7ffd46c042fc items=0 ppid=1411 pid=1445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:40.416000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:18:40.417347 kubelet[1411]: I1002 19:18:40.417132 1411 kubelet.go:2303] "Starting kubelet main sync loop" Oct 2 19:18:40.417347 kubelet[1411]: E1002 19:18:40.417174 1411 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 2 19:18:40.416000 audit[1446]: NETFILTER_CFG table=mangle:9 family=10 entries=1 op=nft_register_chain pid=1446 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:40.416000 audit[1446]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb5d99600 a2=0 a3=7ffcb5d995ec items=0 ppid=1411 pid=1446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:40.416000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:18:40.417000 audit[1447]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1447 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:40.417000 audit[1447]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffc7e14f870 a2=0 a3=7ffc7e14f85c items=0 ppid=1411 pid=1447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:40.417000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:18:40.418334 kubelet[1411]: W1002 19:18:40.418302 1411 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:18:40.418334 kubelet[1411]: E1002 19:18:40.418322 1411 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:18:40.417000 audit[1448]: NETFILTER_CFG table=nat:11 family=10 entries=2 op=nft_register_chain pid=1448 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:40.417000 audit[1448]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffd84e7c0f0 a2=0 a3=7ffd84e7c0dc items=0 ppid=1411 pid=1448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:40.417000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:18:40.418000 audit[1449]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_chain pid=1449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:40.418000 audit[1449]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffddc2651f0 a2=0 a3=7ffddc2651dc items=0 ppid=1411 pid=1449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:40.418000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:18:40.418000 audit[1450]: NETFILTER_CFG table=filter:13 family=10 entries=2 op=nft_register_chain pid=1450 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:40.418000 audit[1450]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffef84e5dc0 a2=0 a3=7ffef84e5dac items=0 ppid=1411 pid=1450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:40.418000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:18:40.479972 kubelet[1411]: I1002 
19:18:40.479952 1411 policy_none.go:49] "None policy: Start" Oct 2 19:18:40.480480 kubelet[1411]: I1002 19:18:40.480460 1411 memory_manager.go:169] "Starting memorymanager" policy="None" Oct 2 19:18:40.480480 kubelet[1411]: I1002 19:18:40.480475 1411 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:18:40.497505 kubelet[1411]: E1002 19:18:40.497475 1411 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.130\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Oct 2 19:18:40.517635 kubelet[1411]: E1002 19:18:40.517591 1411 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 2 19:18:40.591618 kubelet[1411]: I1002 19:18:40.591505 1411 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.130" Oct 2 19:18:40.592819 kubelet[1411]: E1002 19:18:40.592701 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.130.178a6081cf94f62c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.130", UID:"10.0.0.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.130 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.130"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 308106796, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 591470204, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.130"}': 'events "10.0.0.130.178a6081cf94f62c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:18:40.593629 kubelet[1411]: E1002 19:18:40.593547 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.130.178a6081cf95096d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.130", UID:"10.0.0.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.130 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.130"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 308111725, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 591474522, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.130"}': 'events "10.0.0.130.178a6081cf95096d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:18:40.593783 kubelet[1411]: E1002 19:18:40.593755 1411 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.130" Oct 2 19:18:40.594524 kubelet[1411]: E1002 19:18:40.594478 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.130.178a6081cf9514e5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.130", UID:"10.0.0.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.130 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.130"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 308114661, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 591477046, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.130"}': 'events "10.0.0.130.178a6081cf9514e5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:18:40.604647 systemd[1]: Created slice kubepods.slice. Oct 2 19:18:40.608333 systemd[1]: Created slice kubepods-besteffort.slice. Oct 2 19:18:40.616697 systemd[1]: Created slice kubepods-burstable.slice. 
Oct 2 19:18:40.617497 kubelet[1411]: I1002 19:18:40.617466 1411 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:18:40.616000 audit[1411]: AVC avc: denied { mac_admin } for pid=1411 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:40.616000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:18:40.616000 audit[1411]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000e87e90 a1=c000eaaa80 a2=c000e87e60 a3=25 items=0 ppid=1 pid=1411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:40.616000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:18:40.617722 kubelet[1411]: I1002 19:18:40.617536 1411 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:18:40.617722 kubelet[1411]: I1002 19:18:40.617706 1411 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:18:40.618435 kubelet[1411]: E1002 19:18:40.618415 1411 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.130\" not found" Oct 2 19:18:40.624171 kubelet[1411]: E1002 19:18:40.624101 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.130.178a6081e25e9c38", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.130", UID:"10.0.0.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.130"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 623311928, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 623311928, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.130"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:18:40.899077 kubelet[1411]: E1002 19:18:40.898970 1411 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.130\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Oct 2 19:18:40.995128 kubelet[1411]: I1002 19:18:40.995081 1411 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.130" Oct 2 19:18:40.996522 kubelet[1411]: E1002 19:18:40.996497 1411 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.130" Oct 2 19:18:40.996574 kubelet[1411]: E1002 19:18:40.996494 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.130.178a6081cf94f62c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.130", UID:"10.0.0.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.130 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.130"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 308106796, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 995044640, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.130"}': 'events "10.0.0.130.178a6081cf94f62c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:18:40.997468 kubelet[1411]: E1002 19:18:40.997419 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.130.178a6081cf95096d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.130", UID:"10.0.0.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.130 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.130"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 308111725, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 995049509, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.130"}': 'events "10.0.0.130.178a6081cf95096d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:18:40.998200 kubelet[1411]: E1002 19:18:40.998152 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.130.178a6081cf9514e5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.130", UID:"10.0.0.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.130 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.130"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 308114661, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 40, 995053967, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.130"}': 'events "10.0.0.130.178a6081cf9514e5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:18:41.153443 kubelet[1411]: W1002 19:18:41.153325 1411 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.130" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:18:41.153443 kubelet[1411]: E1002 19:18:41.153359 1411 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.130" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:18:41.279020 kubelet[1411]: I1002 19:18:41.278929 1411 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:18:41.285153 kubelet[1411]: E1002 19:18:41.285106 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:41.652649 kubelet[1411]: E1002 19:18:41.652534 1411 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.130" not found Oct 2 19:18:41.703108 kubelet[1411]: E1002 19:18:41.703063 1411 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.130\" not found" node="10.0.0.130" Oct 2 19:18:41.797743 kubelet[1411]: I1002 19:18:41.797696 1411 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.130" Oct 2 19:18:41.803767 kubelet[1411]: I1002 19:18:41.803733 1411 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.130" Oct 2 19:18:41.819043 kubelet[1411]: E1002 19:18:41.818978 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" Oct 2 19:18:41.919970 kubelet[1411]: E1002 19:18:41.919848 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" Oct 2 19:18:41.923259 sudo[1229]: pam_unix(sudo:session): session closed for user root Oct 2 19:18:41.922000 audit[1229]: USER_END pid=1229 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:18:41.922000 audit[1229]: CRED_DISP pid=1229 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:18:41.924404 sshd[1225]: pam_unix(sshd:session): session closed for user core Oct 2 19:18:41.924000 audit[1225]: USER_END pid=1225 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:18:41.924000 audit[1225]: CRED_DISP pid=1225 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:18:41.926424 systemd[1]: sshd@6-10.0.0.130:22-10.0.0.1:46248.service: Deactivated successfully. 
Oct 2 19:18:41.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.130:22-10.0.0.1:46248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:41.927183 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:18:41.927720 systemd-logind[1090]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:18:41.928406 systemd-logind[1090]: Removed session 7. Oct 2 19:18:42.020038 kubelet[1411]: E1002 19:18:42.020002 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" Oct 2 19:18:42.120794 kubelet[1411]: E1002 19:18:42.120736 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" Oct 2 19:18:42.221481 kubelet[1411]: E1002 19:18:42.221356 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" Oct 2 19:18:42.285798 kubelet[1411]: E1002 19:18:42.285731 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:42.322409 kubelet[1411]: E1002 19:18:42.322346 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" Oct 2 19:18:42.423416 kubelet[1411]: E1002 19:18:42.423341 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" Oct 2 19:18:42.524252 kubelet[1411]: E1002 19:18:42.524085 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" Oct 2 19:18:42.625163 kubelet[1411]: E1002 19:18:42.625100 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" Oct 2 19:18:42.725996 kubelet[1411]: E1002 19:18:42.725858 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" Oct 2 19:18:42.826857 kubelet[1411]: E1002 19:18:42.826738 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" Oct 2 19:18:42.927609 kubelet[1411]: E1002 19:18:42.927531 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" Oct 2 19:18:43.028283 kubelet[1411]: E1002 19:18:43.028223 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" Oct 2 19:18:43.129620 kubelet[1411]: I1002 19:18:43.129501 1411 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:18:43.129950 env[1101]: time="2023-10-02T19:18:43.129886901Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 2 19:18:43.130344 kubelet[1411]: I1002 19:18:43.130048 1411 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:18:43.286418 kubelet[1411]: E1002 19:18:43.286358 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:43.286418 kubelet[1411]: I1002 19:18:43.286369 1411 apiserver.go:52] "Watching apiserver" Oct 2 19:18:43.289292 kubelet[1411]: I1002 19:18:43.289242 1411 topology_manager.go:215] "Topology Admit Handler" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" podNamespace="kube-system" podName="cilium-6mz8p" Oct 2 19:18:43.289494 kubelet[1411]: I1002 19:18:43.289407 1411 topology_manager.go:215] "Topology Admit Handler" podUID="c9cada68-d1b2-4ab8-ad97-10e4c6697eaf" podNamespace="kube-system" podName="kube-proxy-xj7v9" Oct 2 19:18:43.294725 systemd[1]: Created slice kubepods-besteffort-podc9cada68_d1b2_4ab8_ad97_10e4c6697eaf.slice. Oct 2 19:18:43.303110 systemd[1]: Created slice kubepods-burstable-podb36af421_0f93_45ab_a4ea_d3e88013f7f7.slice. Oct 2 19:18:43.388440 kubelet[1411]: I1002 19:18:43.388331 1411 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 2 19:18:43.407439 kubelet[1411]: I1002 19:18:43.407389 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66xcv\" (UniqueName: \"kubernetes.io/projected/b36af421-0f93-45ab-a4ea-d3e88013f7f7-kube-api-access-66xcv\") pod \"cilium-6mz8p\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " pod="kube-system/cilium-6mz8p" Oct 2 19:18:43.407439 kubelet[1411]: I1002 19:18:43.407439 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpggj\" (UniqueName: \"kubernetes.io/projected/c9cada68-d1b2-4ab8-ad97-10e4c6697eaf-kube-api-access-gpggj\") pod \"kube-proxy-xj7v9\" (UID: \"c9cada68-d1b2-4ab8-ad97-10e4c6697eaf\") " pod="kube-system/kube-proxy-xj7v9" Oct 2 19:18:43.407558 kubelet[1411]: I1002 19:18:43.407460 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-bpf-maps\") pod \"cilium-6mz8p\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " pod="kube-system/cilium-6mz8p" Oct 2 19:18:43.407558 kubelet[1411]: I1002 19:18:43.407523 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-xtables-lock\") pod \"cilium-6mz8p\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " pod="kube-system/cilium-6mz8p" Oct 2 19:18:43.407651 kubelet[1411]: I1002 19:18:43.407565 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b36af421-0f93-45ab-a4ea-d3e88013f7f7-clustermesh-secrets\") pod \"cilium-6mz8p\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " pod="kube-system/cilium-6mz8p" Oct 2 19:18:43.407691 kubelet[1411]: I1002 19:18:43.407634 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-lib-modules\") pod \"cilium-6mz8p\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " pod="kube-system/cilium-6mz8p" Oct 2 19:18:43.407716 kubelet[1411]: I1002 
19:18:43.407696 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b36af421-0f93-45ab-a4ea-d3e88013f7f7-cilium-config-path\") pod \"cilium-6mz8p\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " pod="kube-system/cilium-6mz8p" Oct 2 19:18:43.407742 kubelet[1411]: I1002 19:18:43.407719 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b36af421-0f93-45ab-a4ea-d3e88013f7f7-hubble-tls\") pod \"cilium-6mz8p\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " pod="kube-system/cilium-6mz8p" Oct 2 19:18:43.407767 kubelet[1411]: I1002 19:18:43.407753 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9cada68-d1b2-4ab8-ad97-10e4c6697eaf-xtables-lock\") pod \"kube-proxy-xj7v9\" (UID: \"c9cada68-d1b2-4ab8-ad97-10e4c6697eaf\") " pod="kube-system/kube-proxy-xj7v9" Oct 2 19:18:43.407819 kubelet[1411]: I1002 19:18:43.407793 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-cilium-run\") pod \"cilium-6mz8p\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " pod="kube-system/cilium-6mz8p" Oct 2 19:18:43.407848 kubelet[1411]: I1002 19:18:43.407832 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-hostproc\") pod \"cilium-6mz8p\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " pod="kube-system/cilium-6mz8p" Oct 2 19:18:43.407873 kubelet[1411]: I1002 19:18:43.407850 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-cni-path\") pod \"cilium-6mz8p\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " pod="kube-system/cilium-6mz8p" Oct 2 19:18:43.407873 kubelet[1411]: I1002 19:18:43.407867 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-etc-cni-netd\") pod \"cilium-6mz8p\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " pod="kube-system/cilium-6mz8p" Oct 2 19:18:43.407922 kubelet[1411]: I1002 19:18:43.407889 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9cada68-d1b2-4ab8-ad97-10e4c6697eaf-lib-modules\") pod \"kube-proxy-xj7v9\" (UID: \"c9cada68-d1b2-4ab8-ad97-10e4c6697eaf\") " pod="kube-system/kube-proxy-xj7v9" Oct 2 19:18:43.407966 kubelet[1411]: I1002 19:18:43.407924 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c9cada68-d1b2-4ab8-ad97-10e4c6697eaf-kube-proxy\") pod \"kube-proxy-xj7v9\" (UID: \"c9cada68-d1b2-4ab8-ad97-10e4c6697eaf\") " pod="kube-system/kube-proxy-xj7v9" Oct 2 19:18:43.407991 kubelet[1411]: I1002 19:18:43.407967 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-cilium-cgroup\") pod \"cilium-6mz8p\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " pod="kube-system/cilium-6mz8p" Oct 2 19:18:43.408027 kubelet[1411]: I1002 19:18:43.408015 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-host-proc-sys-net\") pod \"cilium-6mz8p\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " pod="kube-system/cilium-6mz8p" Oct 2 19:18:43.408096 kubelet[1411]: I1002 19:18:43.408077 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-host-proc-sys-kernel\") pod \"cilium-6mz8p\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " pod="kube-system/cilium-6mz8p" Oct 2 19:18:43.601210 kubelet[1411]: E1002 19:18:43.601168 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:18:43.601979 env[1101]: time="2023-10-02T19:18:43.601918679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xj7v9,Uid:c9cada68-d1b2-4ab8-ad97-10e4c6697eaf,Namespace:kube-system,Attempt:0,}" Oct 2 19:18:43.615675 kubelet[1411]: E1002 19:18:43.615638 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:18:43.616177 env[1101]: time="2023-10-02T19:18:43.616128458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6mz8p,Uid:b36af421-0f93-45ab-a4ea-d3e88013f7f7,Namespace:kube-system,Attempt:0,}" Oct 2 19:18:44.266958 env[1101]: time="2023-10-02T19:18:44.266895578Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:44.269076 env[1101]: time="2023-10-02T19:18:44.269022717Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:44.271122 env[1101]: time="2023-10-02T19:18:44.271040872Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:44.272573 env[1101]: time="2023-10-02T19:18:44.272538611Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:44.274125 env[1101]: time="2023-10-02T19:18:44.274102163Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:44.276398 env[1101]: time="2023-10-02T19:18:44.276365888Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:44.278125 env[1101]: time="2023-10-02T19:18:44.278105531Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:44.279460 env[1101]: time="2023-10-02T19:18:44.279424234Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:44.286692 kubelet[1411]: E1002 19:18:44.286650 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:44.298503 env[1101]: time="2023-10-02T19:18:44.298432251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:18:44.298503 env[1101]: time="2023-10-02T19:18:44.298467777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:18:44.298503 env[1101]: time="2023-10-02T19:18:44.298477315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:18:44.298748 env[1101]: time="2023-10-02T19:18:44.298588303Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/445004cc5fb3e99f13ef6a2cacde64341557ed0dd8bfe680be71d2a22856f730 pid=1471 runtime=io.containerd.runc.v2 Oct 2 19:18:44.298748 env[1101]: time="2023-10-02T19:18:44.298639419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:18:44.298748 env[1101]: time="2023-10-02T19:18:44.298698760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:18:44.298748 env[1101]: time="2023-10-02T19:18:44.298719590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:18:44.298865 env[1101]: time="2023-10-02T19:18:44.298830968Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a pid=1475 runtime=io.containerd.runc.v2 Oct 2 19:18:44.309782 systemd[1]: Started cri-containerd-177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a.scope. Oct 2 19:18:44.315642 systemd[1]: Started cri-containerd-445004cc5fb3e99f13ef6a2cacde64341557ed0dd8bfe680be71d2a22856f730.scope. 
Oct 2 19:18:44.317000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.319274 kernel: kauditd_printk_skb: 416 callbacks suppressed Oct 2 19:18:44.319351 kernel: audit: type=1400 audit(1696274324.317:551): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.317000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.330980 kernel: audit: type=1400 audit(1696274324.317:552): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.331085 kernel: audit: type=1400 audit(1696274324.317:553): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.331110 kernel: audit: type=1400 audit(1696274324.317:554): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.331140 kernel: audit: type=1400 audit(1696274324.317:555): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.331162 kernel: audit: type=1400 audit(1696274324.317:556): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.331184 kernel: audit: type=1400 audit(1696274324.317:557): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.331203 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:18:44.317000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.317000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.317000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.317000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.317000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.317000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334707 
kernel: audit: type=1400 audit(1696274324.317:558): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334779 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:18:44.317000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.320000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.320000 audit: BPF prog-id=64 op=LOAD Oct 2 19:18:44.320000 audit[1492]: AVC avc: denied { bpf } for pid=1492 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.320000 audit[1492]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c0001bdc48 a2=10 a3=1c items=0 ppid=1475 pid=1492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:44.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137376339376133343033323466646661353461306534313265656463 Oct 2 19:18:44.320000 audit[1492]: AVC avc: denied { perfmon } for pid=1492 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.320000 audit[1492]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001bd6b0 a2=3c a3=c items=0 ppid=1475 pid=1492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:44.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137376339376133343033323466646661353461306534313265656463 Oct 2 19:18:44.320000 audit[1492]: AVC avc: denied { bpf } for pid=1492 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.320000 audit[1492]: AVC avc: denied { bpf } for pid=1492 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.320000 audit[1492]: AVC avc: denied { bpf } for pid=1492 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.320000 audit[1492]: AVC avc: denied { perfmon } for pid=1492 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.320000 audit[1492]: AVC avc: denied { perfmon } for pid=1492 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:18:44.320000 audit[1492]: AVC avc: denied { perfmon } for pid=1492 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.320000 audit[1492]: AVC avc: denied { perfmon } for pid=1492 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.320000 audit[1492]: AVC avc: denied { perfmon } for pid=1492 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.320000 audit[1492]: AVC avc: denied { bpf } for pid=1492 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.320000 audit[1492]: AVC avc: denied { bpf } for pid=1492 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.320000 audit: BPF prog-id=65 op=LOAD Oct 2 19:18:44.320000 audit[1492]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bd9d8 a2=78 a3=c000356780 items=0 ppid=1475 pid=1492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:44.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137376339376133343033323466646661353461306534313265656463 Oct 2 19:18:44.324000 audit[1492]: AVC avc: denied { bpf } for pid=1492 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.324000 audit[1492]: AVC avc: denied { bpf } for pid=1492 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.324000 audit[1492]: AVC avc: denied { perfmon } for pid=1492 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.324000 audit[1492]: AVC avc: denied { perfmon } for pid=1492 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.324000 audit[1492]: AVC avc: denied { perfmon } for pid=1492 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.324000 audit[1492]: AVC avc: denied { perfmon } for pid=1492 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.324000 audit[1492]: AVC avc: denied { perfmon } for pid=1492 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.324000 audit[1492]: AVC avc: denied { bpf } for pid=1492 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.324000 audit[1492]: AVC avc: denied { bpf } for pid=1492 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.324000 audit: BPF prog-id=66 op=LOAD Oct 2 19:18:44.329000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.329000 audit: BPF prog-id=67 op=LOAD Oct 2 19:18:44.324000 audit[1492]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001bd770 a2=78 a3=c0003567c8 items=0 ppid=1475 pid=1492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:44.324000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137376339376133343033323466646661353461306534313265656463 Oct 2 19:18:44.329000 audit: BPF prog-id=66 op=UNLOAD Oct 2 19:18:44.329000 audit: BPF prog-id=65 op=UNLOAD Oct 2 19:18:44.329000 audit[1492]: AVC avc: denied { bpf } for pid=1492 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.329000 audit[1492]: AVC avc: denied { bpf } for pid=1492 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.329000 audit[1492]: AVC avc: denied { bpf } for pid=1492 comm="runc" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.329000 audit[1492]: AVC avc: denied { perfmon } for pid=1492 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.329000 audit[1492]: AVC avc: denied { perfmon } for pid=1492 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.329000 audit[1492]: AVC avc: denied { perfmon } for pid=1492 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.329000 audit[1492]: AVC avc: denied { perfmon } for pid=1492 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.329000 audit[1492]: AVC avc: denied { perfmon } for pid=1492 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.329000 audit[1492]: AVC avc: denied { bpf } for pid=1492 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.329000 audit[1494]: AVC avc: denied { bpf } for pid=1494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.329000 audit[1494]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=1471 pid=1494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:44.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434353030346363356662336539396631336566366132636163646536 Oct 2 19:18:44.329000 audit[1492]: AVC avc: denied { bpf } for pid=1492 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.329000 audit: BPF prog-id=68 op=LOAD Oct 2 19:18:44.329000 audit[1492]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bdc30 a2=78 a3=c000356bd8 items=0 ppid=1475 pid=1492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:44.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137376339376133343033323466646661353461306534313265656463 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { bpf } for pid=1494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { bpf } for pid=1494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { bpf } for pid=1494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { perfmon } for pid=1494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { perfmon } for pid=1494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { perfmon } for pid=1494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { perfmon } for pid=1494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { perfmon } for pid=1494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { bpf } for pid=1494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { bpf } for pid=1494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit: BPF prog-id=69 op=LOAD Oct 2 19:18:44.334000 audit[1494]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0003c1050 items=0 ppid=1471 pid=1494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:44.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434353030346363356662336539396631336566366132636163646536 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { bpf } for pid=1494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { bpf } for pid=1494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { perfmon } for pid=1494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { perfmon } for pid=1494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { perfmon } for pid=1494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { perfmon } for pid=1494 
comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { perfmon } for pid=1494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { bpf } for pid=1494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { bpf } for pid=1494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit: BPF prog-id=70 op=LOAD Oct 2 19:18:44.334000 audit[1494]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c0003c1098 items=0 ppid=1471 pid=1494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:44.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434353030346363356662336539396631336566366132636163646536 Oct 2 19:18:44.334000 audit: BPF prog-id=70 op=UNLOAD Oct 2 19:18:44.334000 audit: BPF prog-id=69 op=UNLOAD Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { bpf } for pid=1494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { bpf } for pid=1494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { bpf } for pid=1494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { perfmon } for pid=1494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { perfmon } for pid=1494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { perfmon } for pid=1494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { perfmon } for pid=1494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { perfmon } for pid=1494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { bpf } for pid=1494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit[1494]: AVC avc: denied { bpf } for pid=1494 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.334000 audit: BPF prog-id=71 op=LOAD Oct 2 19:18:44.334000 audit[1494]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c0003c14a8 items=0 ppid=1471 pid=1494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:44.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434353030346363356662336539396631336566366132636163646536 Oct 2 19:18:44.348195 env[1101]: time="2023-10-02T19:18:44.348154758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6mz8p,Uid:b36af421-0f93-45ab-a4ea-d3e88013f7f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a\"" Oct 2 19:18:44.349057 kubelet[1411]: E1002 19:18:44.349024 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:18:44.350375 env[1101]: time="2023-10-02T19:18:44.350324858Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 2 19:18:44.351604 env[1101]: time="2023-10-02T19:18:44.351577938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xj7v9,Uid:c9cada68-d1b2-4ab8-ad97-10e4c6697eaf,Namespace:kube-system,Attempt:0,} returns sandbox id \"445004cc5fb3e99f13ef6a2cacde64341557ed0dd8bfe680be71d2a22856f730\"" Oct 2 19:18:44.352159 kubelet[1411]: E1002 19:18:44.352129 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:18:44.514991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2643523179.mount: Deactivated successfully. Oct 2 19:18:45.286913 kubelet[1411]: E1002 19:18:45.286878 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:46.287363 kubelet[1411]: E1002 19:18:46.287320 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:47.288357 kubelet[1411]: E1002 19:18:47.288311 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:48.288425 kubelet[1411]: E1002 19:18:48.288394 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:48.825193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount9411239.mount: Deactivated successfully. 
Oct 2 19:18:49.289188 kubelet[1411]: E1002 19:18:49.289072 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:50.289833 kubelet[1411]: E1002 19:18:50.289776 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:51.290307 kubelet[1411]: E1002 19:18:51.290269 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:52.290677 kubelet[1411]: E1002 19:18:52.290621 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:52.349690 env[1101]: time="2023-10-02T19:18:52.349622166Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:52.351885 env[1101]: time="2023-10-02T19:18:52.351846808Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:52.353751 env[1101]: time="2023-10-02T19:18:52.353722906Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:52.354443 env[1101]: time="2023-10-02T19:18:52.354410245Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 2 19:18:52.355174 env[1101]: time="2023-10-02T19:18:52.355057559Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.2\"" Oct 2 19:18:52.358600 env[1101]: time="2023-10-02T19:18:52.358528759Z" level=info msg="CreateContainer within sandbox \"177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:18:52.374579 env[1101]: time="2023-10-02T19:18:52.374509679Z" level=info msg="CreateContainer within sandbox \"177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7\"" Oct 2 19:18:52.375296 env[1101]: time="2023-10-02T19:18:52.375268081Z" level=info msg="StartContainer for \"067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7\"" Oct 2 19:18:52.394131 systemd[1]: Started cri-containerd-067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7.scope. Oct 2 19:18:52.402280 systemd[1]: cri-containerd-067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7.scope: Deactivated successfully. Oct 2 19:18:52.402651 systemd[1]: Stopped cri-containerd-067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7.scope. Oct 2 19:18:52.405952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7-rootfs.mount: Deactivated successfully. 
Oct 2 19:18:52.806758 env[1101]: time="2023-10-02T19:18:52.806699128Z" level=info msg="shim disconnected" id=067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7 Oct 2 19:18:52.806758 env[1101]: time="2023-10-02T19:18:52.806756005Z" level=warning msg="cleaning up after shim disconnected" id=067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7 namespace=k8s.io Oct 2 19:18:52.806758 env[1101]: time="2023-10-02T19:18:52.806766134Z" level=info msg="cleaning up dead shim" Oct 2 19:18:52.813143 env[1101]: time="2023-10-02T19:18:52.813103489Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1577 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:18:52Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:18:52.813443 env[1101]: time="2023-10-02T19:18:52.813336836Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed" Oct 2 19:18:52.813575 env[1101]: time="2023-10-02T19:18:52.813536050Z" level=error msg="Failed to pipe stdout of container \"067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7\"" error="reading from a closed fifo" Oct 2 19:18:52.813843 env[1101]: time="2023-10-02T19:18:52.813815424Z" level=error msg="Failed to pipe stderr of container \"067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7\"" error="reading from a closed fifo" Oct 2 19:18:52.815834 env[1101]: time="2023-10-02T19:18:52.815779707Z" level=error msg="StartContainer for \"067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:18:52.816090 kubelet[1411]: E1002 19:18:52.816060 1411 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7" Oct 2 19:18:52.816203 kubelet[1411]: E1002 19:18:52.816189 1411 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:18:52.816203 kubelet[1411]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:18:52.816203 kubelet[1411]: rm /hostbin/cilium-mount Oct 2 19:18:52.816353 kubelet[1411]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-66xcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:18:52.816353 kubelet[1411]: E1002 19:18:52.816228 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-6mz8p" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" Oct 2 19:18:53.291015 kubelet[1411]: E1002 19:18:53.290887 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:53.439338 kubelet[1411]: E1002 19:18:53.439313 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:18:53.441003 env[1101]: time="2023-10-02T19:18:53.440963380Z" level=info msg="CreateContainer within sandbox \"177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:18:53.452674 env[1101]: time="2023-10-02T19:18:53.452553826Z" level=info msg="CreateContainer within sandbox \"177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916\"" Oct 2 19:18:53.453193 env[1101]: time="2023-10-02T19:18:53.453172837Z" level=info msg="StartContainer for \"a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916\"" Oct 2 19:18:53.636296 systemd[1]: Started cri-containerd-a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916.scope. 
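The recurring dns.go:153 "Nameserver limits exceeded" messages are the kubelet trimming the node's resolv.conf: it forwards at most three nameservers to pods and drops the rest, which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". Below is a minimal sketch of that trimming over a hypothetical resolv.conf; it is illustrative, not the kubelet's own code.

# Sketch of the kubelet's nameserver cap (limit of 3), applied to a
# hypothetical resolv.conf. Illustrative only, not kubelet source.
MAX_NAMESERVERS = 3

def applied_nameservers(resolv_conf: str) -> list[str]:
    servers = [
        line.split()[1]
        for line in resolv_conf.splitlines()
        if line.strip().startswith("nameserver") and len(line.split()) > 1
    ]
    return servers[:MAX_NAMESERVERS]  # extra entries are dropped with a warning

example = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""
print(applied_nameservers(example))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']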
Oct 2 19:18:53.661739 systemd[1]: cri-containerd-a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916.scope: Deactivated successfully. Oct 2 19:18:53.691310 env[1101]: time="2023-10-02T19:18:53.691259204Z" level=info msg="shim disconnected" id=a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916 Oct 2 19:18:53.691310 env[1101]: time="2023-10-02T19:18:53.691307595Z" level=warning msg="cleaning up after shim disconnected" id=a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916 namespace=k8s.io Oct 2 19:18:53.691310 env[1101]: time="2023-10-02T19:18:53.691316612Z" level=info msg="cleaning up dead shim" Oct 2 19:18:53.697488 env[1101]: time="2023-10-02T19:18:53.697437030Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1615 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:18:53Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:18:53.697728 env[1101]: time="2023-10-02T19:18:53.697671459Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:18:53.697926 env[1101]: time="2023-10-02T19:18:53.697859843Z" level=error msg="Failed to pipe stdout of container \"a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916\"" error="reading from a closed fifo" Oct 2 19:18:53.698096 env[1101]: time="2023-10-02T19:18:53.697878888Z" level=error msg="Failed to pipe stderr of container \"a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916\"" error="reading from a closed fifo" Oct 2 19:18:53.701342 env[1101]: time="2023-10-02T19:18:53.701281179Z" level=error msg="StartContainer for \"a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:18:53.701591 kubelet[1411]: E1002 19:18:53.701562 1411 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916" Oct 2 19:18:53.701706 kubelet[1411]: E1002 19:18:53.701689 1411 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:18:53.701706 kubelet[1411]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:18:53.701706 kubelet[1411]: rm /hostbin/cilium-mount Oct 2 19:18:53.701706 kubelet[1411]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-66xcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:18:53.701889 kubelet[1411]: E1002 19:18:53.701731 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-6mz8p" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" Oct 2 19:18:54.291955 kubelet[1411]: E1002 19:18:54.291882 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:54.432394 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916-rootfs.mount: Deactivated successfully. Oct 2 19:18:54.432494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2206879201.mount: Deactivated successfully. 
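Both mount-cgroup attempts above fail at the same point: during container init, runc writes the pod's requested SELinux context (type spc_t, level s0, per the SELinuxOptions in the container spec) to /proc/self/attr/keycreate so that keys created inside the container carry that label, and the kernel rejects the write with "invalid argument" (EINVAL), typically because the loaded policy does not accept that context. containerd surfaces this as the StartContainer error, and the kubelet then retries the init container. The sketch below exercises the same kernel interface with a hypothetical context string; it is illustrative only, not the runc code path.

# Sketch of the interface that fails above. runc labels the session keyring
# by writing the container's SELinux context to /proc/self/attr/keycreate;
# the kernel answers EINVAL when it will not accept that context, which is
# the "invalid argument" in the StartContainer error. The context below is
# a hypothetical example, not taken from this system's policy.
import errno

def set_keycreate_label(context: str) -> None:
    try:
        with open("/proc/self/attr/keycreate", "w") as f:
            f.write(context)
        print(f"kernel accepted keyring label {context!r}")
    except FileNotFoundError:
        print("no /proc/self/attr/keycreate here (SELinux not enabled)")
    except OSError as e:
        if e.errno == errno.EINVAL:
            print(f"EINVAL: kernel rejected context {context!r}")
        else:
            raise

if __name__ == "__main__":
    set_keycreate_label("system_u:system_r:container_t:s0")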
Oct 2 19:18:54.433299 env[1101]: time="2023-10-02T19:18:54.433239562Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:54.435444 env[1101]: time="2023-10-02T19:18:54.435406596Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:54.436911 env[1101]: time="2023-10-02T19:18:54.436878276Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:54.438567 env[1101]: time="2023-10-02T19:18:54.438429144Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:54.438917 env[1101]: time="2023-10-02T19:18:54.438828523Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.2\" returns image reference \"sha256:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0\"" Oct 2 19:18:54.440847 env[1101]: time="2023-10-02T19:18:54.440817353Z" level=info msg="CreateContainer within sandbox \"445004cc5fb3e99f13ef6a2cacde64341557ed0dd8bfe680be71d2a22856f730\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:18:54.441751 kubelet[1411]: I1002 19:18:54.441723 1411 scope.go:117] "RemoveContainer" containerID="067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7" Oct 2 19:18:54.441998 kubelet[1411]: I1002 19:18:54.441952 1411 scope.go:117] "RemoveContainer" containerID="067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7" Oct 2 19:18:54.443472 env[1101]: time="2023-10-02T19:18:54.443437036Z" level=info msg="RemoveContainer for \"067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7\"" Oct 2 19:18:54.443778 env[1101]: time="2023-10-02T19:18:54.443577129Z" level=info msg="RemoveContainer for \"067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7\"" Oct 2 19:18:54.443778 env[1101]: time="2023-10-02T19:18:54.443617194Z" level=error msg="RemoveContainer for \"067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7\" failed" error="failed to set removing state for container \"067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7\": container is already in removing state" Oct 2 19:18:54.443844 kubelet[1411]: E1002 19:18:54.443751 1411 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7\": container is already in removing state" containerID="067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7" Oct 2 19:18:54.443844 kubelet[1411]: E1002 19:18:54.443790 1411 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7": container is already in removing state; Skipping pod "cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7)" Oct 2 19:18:54.443844 kubelet[1411]: E1002 19:18:54.443845 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:18:54.444054 kubelet[1411]: E1002 19:18:54.444034 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7)\"" pod="kube-system/cilium-6mz8p" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" Oct 2 19:18:54.448194 env[1101]: time="2023-10-02T19:18:54.448153441Z" level=info msg="RemoveContainer for \"067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7\" returns successfully" Oct 2 19:18:54.455554 env[1101]: time="2023-10-02T19:18:54.455495360Z" level=info msg="CreateContainer within sandbox \"445004cc5fb3e99f13ef6a2cacde64341557ed0dd8bfe680be71d2a22856f730\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fb9a6607abd6ee68b355180de7c59796ce038e9e2d42091a90f5b3c05fb3e1ee\"" Oct 2 19:18:54.456071 env[1101]: time="2023-10-02T19:18:54.456001940Z" level=info msg="StartContainer for \"fb9a6607abd6ee68b355180de7c59796ce038e9e2d42091a90f5b3c05fb3e1ee\"" Oct 2 19:18:54.474637 systemd[1]: Started cri-containerd-fb9a6607abd6ee68b355180de7c59796ce038e9e2d42091a90f5b3c05fb3e1ee.scope. Oct 2 19:18:54.490000 audit[1634]: AVC avc: denied { perfmon } for pid=1634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.493782 kernel: kauditd_printk_skb: 104 callbacks suppressed Oct 2 19:18:54.493857 kernel: audit: type=1400 audit(1696274334.490:586): avc: denied { perfmon } for pid=1634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.493881 kernel: audit: type=1300 audit(1696274334.490:586): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001456b0 a2=3c a3=8 items=0 ppid=1471 pid=1634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.490000 audit[1634]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001456b0 a2=3c a3=8 items=0 ppid=1471 pid=1634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.490000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662396136363037616264366565363862333535313830646537633539 Oct 2 19:18:54.499203 kernel: audit: type=1327 audit(1696274334.490:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662396136363037616264366565363862333535313830646537633539 Oct 2 19:18:54.499237 kernel: audit: type=1400 audit(1696274334.490:587): avc: denied { bpf } for pid=1634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.490000 audit[1634]: AVC avc: denied { bpf } for pid=1634 comm="runc" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.501090 kernel: audit: type=1400 audit(1696274334.490:587): avc: denied { bpf } for pid=1634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.490000 audit[1634]: AVC avc: denied { bpf } for pid=1634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.502907 kernel: audit: type=1400 audit(1696274334.490:587): avc: denied { bpf } for pid=1634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.490000 audit[1634]: AVC avc: denied { bpf } for pid=1634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.490000 audit[1634]: AVC avc: denied { perfmon } for pid=1634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.506644 kernel: audit: type=1400 audit(1696274334.490:587): avc: denied { perfmon } for pid=1634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.490000 audit[1634]: AVC avc: denied { perfmon } for pid=1634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.508777 kernel: audit: type=1400 audit(1696274334.490:587): avc: denied { perfmon } for pid=1634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.510693 kernel: audit: type=1400 audit(1696274334.490:587): avc: denied { perfmon } for pid=1634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.490000 audit[1634]: AVC avc: denied { perfmon } for pid=1634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.512694 kernel: audit: type=1400 audit(1696274334.490:587): avc: denied { perfmon } for pid=1634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.490000 audit[1634]: AVC avc: denied { perfmon } for pid=1634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.490000 audit[1634]: AVC avc: denied { perfmon } for pid=1634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.490000 audit[1634]: AVC avc: denied { bpf } for pid=1634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.490000 audit[1634]: AVC avc: denied { bpf } for pid=1634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.490000 audit: BPF prog-id=72 op=LOAD Oct 2 
19:18:54.490000 audit[1634]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001459d8 a2=78 a3=c0003600a0 items=0 ppid=1471 pid=1634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.490000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662396136363037616264366565363862333535313830646537633539 Oct 2 19:18:54.492000 audit[1634]: AVC avc: denied { bpf } for pid=1634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.492000 audit[1634]: AVC avc: denied { bpf } for pid=1634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.492000 audit[1634]: AVC avc: denied { perfmon } for pid=1634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.492000 audit[1634]: AVC avc: denied { perfmon } for pid=1634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.492000 audit[1634]: AVC avc: denied { perfmon } for pid=1634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.492000 audit[1634]: AVC avc: denied { perfmon } for pid=1634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.492000 audit[1634]: AVC avc: denied { perfmon } for pid=1634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.492000 audit[1634]: AVC avc: denied { bpf } for pid=1634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.492000 audit[1634]: AVC avc: denied { bpf } for pid=1634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.492000 audit: BPF prog-id=73 op=LOAD Oct 2 19:18:54.492000 audit[1634]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000145770 a2=78 a3=c0003600e8 items=0 ppid=1471 pid=1634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662396136363037616264366565363862333535313830646537633539 Oct 2 19:18:54.498000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:18:54.498000 audit: BPF prog-id=72 op=UNLOAD Oct 2 19:18:54.498000 audit[1634]: AVC avc: denied { bpf } for pid=1634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:18:54.498000 audit[1634]: AVC avc: denied { bpf } for pid=1634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.498000 audit[1634]: AVC avc: denied { bpf } for pid=1634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.498000 audit[1634]: AVC avc: denied { perfmon } for pid=1634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.498000 audit[1634]: AVC avc: denied { perfmon } for pid=1634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.498000 audit[1634]: AVC avc: denied { perfmon } for pid=1634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.498000 audit[1634]: AVC avc: denied { perfmon } for pid=1634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.498000 audit[1634]: AVC avc: denied { perfmon } for pid=1634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.498000 audit[1634]: AVC avc: denied { bpf } for pid=1634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.498000 audit[1634]: AVC avc: denied { bpf } for pid=1634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.498000 audit: BPF prog-id=74 op=LOAD Oct 2 19:18:54.498000 audit[1634]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000145c30 a2=78 a3=c000360178 items=0 ppid=1471 pid=1634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662396136363037616264366565363862333535313830646537633539 Oct 2 19:18:54.520177 env[1101]: time="2023-10-02T19:18:54.520141144Z" level=info msg="StartContainer for \"fb9a6607abd6ee68b355180de7c59796ce038e9e2d42091a90f5b3c05fb3e1ee\" returns successfully" Oct 2 19:18:54.575000 audit[1685]: NETFILTER_CFG table=mangle:14 family=2 entries=1 op=nft_register_chain pid=1685 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:54.575000 audit[1685]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc55e63c0 a2=0 a3=7ffcc55e63ac items=0 ppid=1644 pid=1685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.575000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:18:54.576000 audit[1687]: NETFILTER_CFG table=nat:15 
family=2 entries=1 op=nft_register_chain pid=1687 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:54.576000 audit[1687]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb87b89e0 a2=0 a3=7ffeb87b89cc items=0 ppid=1644 pid=1687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.576000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:18:54.577000 audit[1686]: NETFILTER_CFG table=mangle:16 family=10 entries=1 op=nft_register_chain pid=1686 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.577000 audit[1686]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff1c0339b0 a2=0 a3=7fff1c03399c items=0 ppid=1644 pid=1686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.577000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:18:54.578000 audit[1688]: NETFILTER_CFG table=nat:17 family=10 entries=1 op=nft_register_chain pid=1688 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.578000 audit[1688]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe79db9e00 a2=0 a3=7ffe79db9dec items=0 ppid=1644 pid=1688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.578000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:18:54.578000 audit[1689]: NETFILTER_CFG table=filter:18 family=2 entries=1 op=nft_register_chain pid=1689 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:54.578000 audit[1689]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd670a4d80 a2=0 a3=7ffd670a4d6c items=0 ppid=1644 pid=1689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.578000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:18:54.579000 audit[1690]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1690 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.579000 audit[1690]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd0aa21ad0 a2=0 a3=7ffd0aa21abc items=0 ppid=1644 pid=1690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.579000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:18:54.677000 audit[1691]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_chain pid=1691 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:54.677000 audit[1691]: SYSCALL arch=c000003e syscall=46 
success=yes exit=108 a0=3 a1=7ffc03c91d70 a2=0 a3=7ffc03c91d5c items=0 ppid=1644 pid=1691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.677000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:18:54.680000 audit[1693]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1693 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:54.680000 audit[1693]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff4face2a0 a2=0 a3=7fff4face28c items=0 ppid=1644 pid=1693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.680000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:18:54.683000 audit[1696]: NETFILTER_CFG table=filter:22 family=2 entries=2 op=nft_register_chain pid=1696 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:54.683000 audit[1696]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcdda38bf0 a2=0 a3=7ffcdda38bdc items=0 ppid=1644 pid=1696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.683000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:18:54.684000 audit[1697]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_chain pid=1697 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:54.684000 audit[1697]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffddc58bd20 a2=0 a3=7ffddc58bd0c items=0 ppid=1644 pid=1697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.684000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:18:54.686000 audit[1699]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=1699 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:54.686000 audit[1699]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff503f0db0 a2=0 a3=7fff503f0d9c items=0 ppid=1644 pid=1699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.686000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:18:54.687000 audit[1700]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_chain pid=1700 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:54.687000 audit[1700]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff18c3c1f0 a2=0 a3=7fff18c3c1dc items=0 ppid=1644 pid=1700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.687000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:18:54.689000 audit[1702]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=1702 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:54.689000 audit[1702]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe81ffbcd0 a2=0 a3=7ffe81ffbcbc items=0 ppid=1644 pid=1702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.689000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:18:54.692000 audit[1705]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=1705 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:54.692000 audit[1705]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc5b89f340 a2=0 a3=7ffc5b89f32c items=0 ppid=1644 pid=1705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.692000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:18:54.693000 audit[1706]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1706 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:54.693000 audit[1706]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc403ab610 a2=0 a3=7ffc403ab5fc items=0 ppid=1644 pid=1706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.693000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:18:54.696000 audit[1708]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1708 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:54.696000 audit[1708]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc3cc42770 a2=0 a3=7ffc3cc4275c items=0 ppid=1644 pid=1708 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.696000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:18:54.697000 audit[1709]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=1709 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:54.697000 audit[1709]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2fd85010 a2=0 a3=7ffe2fd84ffc items=0 ppid=1644 pid=1709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.697000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:18:54.699000 audit[1711]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_rule pid=1711 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:54.699000 audit[1711]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffceea797a0 a2=0 a3=7ffceea7978c items=0 ppid=1644 pid=1711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.699000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:18:54.702000 audit[1714]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=1714 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:54.702000 audit[1714]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc2c28ce30 a2=0 a3=7ffc2c28ce1c items=0 ppid=1644 pid=1714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.702000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:18:54.706000 audit[1717]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1717 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:54.706000 audit[1717]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc077305b0 a2=0 a3=7ffc0773059c items=0 ppid=1644 pid=1717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.706000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:18:54.707000 audit[1718]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1718 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:54.707000 audit[1718]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc1f796550 a2=0 a3=7ffc1f79653c items=0 ppid=1644 pid=1718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.707000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:18:54.708000 audit[1720]: NETFILTER_CFG table=nat:35 family=2 entries=2 op=nft_register_chain pid=1720 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:54.708000 audit[1720]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffd06913070 a2=0 a3=7ffd0691305c items=0 ppid=1644 pid=1720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.708000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:18:54.749000 audit[1725]: NETFILTER_CFG table=nat:36 family=2 entries=2 op=nft_register_chain pid=1725 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:54.749000 audit[1725]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc1e933f00 a2=0 a3=7ffc1e933eec items=0 ppid=1644 pid=1725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.749000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:18:54.750000 audit[1726]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=1726 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:54.750000 audit[1726]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6838cd50 a2=0 a3=7fff6838cd3c items=0 ppid=1644 pid=1726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.750000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:18:54.752000 audit[1728]: NETFILTER_CFG table=nat:38 family=2 entries=2 op=nft_register_chain pid=1728 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:54.752000 audit[1728]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffda99aa5a0 a2=0 a3=7ffda99aa58c items=0 ppid=1644 pid=1728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.752000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:18:54.767000 audit[1734]: NETFILTER_CFG table=filter:39 family=2 entries=8 op=nft_register_rule pid=1734 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:18:54.767000 audit[1734]: SYSCALL arch=c000003e syscall=46 success=yes exit=4956 a0=3 a1=7fffaa287ac0 a2=0 a3=7fffaa287aac items=0 ppid=1644 pid=1734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.767000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:18:54.780000 audit[1734]: NETFILTER_CFG table=nat:40 family=2 entries=14 op=nft_register_chain pid=1734 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:18:54.780000 audit[1734]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fffaa287ac0 a2=0 a3=7fffaa287aac items=0 ppid=1644 pid=1734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.780000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:18:54.782000 audit[1740]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=1740 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.782000 audit[1740]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffde590abd0 a2=0 a3=7ffde590abbc items=0 ppid=1644 pid=1740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.782000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:18:54.785000 audit[1742]: NETFILTER_CFG table=filter:42 family=10 entries=2 op=nft_register_chain pid=1742 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.785000 audit[1742]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc4713ff70 a2=0 a3=7ffc4713ff5c items=0 ppid=1644 pid=1742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.785000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:18:54.789000 audit[1745]: NETFILTER_CFG table=filter:43 family=10 entries=2 op=nft_register_chain pid=1745 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.789000 audit[1745]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 
a1=7ffc79b02560 a2=0 a3=7ffc79b0254c items=0 ppid=1644 pid=1745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.789000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:18:54.790000 audit[1746]: NETFILTER_CFG table=filter:44 family=10 entries=1 op=nft_register_chain pid=1746 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.790000 audit[1746]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffebc2ddd20 a2=0 a3=7ffebc2ddd0c items=0 ppid=1644 pid=1746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.790000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:18:54.792000 audit[1748]: NETFILTER_CFG table=filter:45 family=10 entries=1 op=nft_register_rule pid=1748 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.792000 audit[1748]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffee868d1e0 a2=0 a3=7ffee868d1cc items=0 ppid=1644 pid=1748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.792000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:18:54.793000 audit[1749]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=1749 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.793000 audit[1749]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2a24fd20 a2=0 a3=7ffc2a24fd0c items=0 ppid=1644 pid=1749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.793000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:18:54.795000 audit[1751]: NETFILTER_CFG table=filter:47 family=10 entries=1 op=nft_register_rule pid=1751 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.795000 audit[1751]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffa204bf60 a2=0 a3=7fffa204bf4c items=0 ppid=1644 pid=1751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.795000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 
19:18:54.799000 audit[1754]: NETFILTER_CFG table=filter:48 family=10 entries=2 op=nft_register_chain pid=1754 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.799000 audit[1754]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd42277f30 a2=0 a3=7ffd42277f1c items=0 ppid=1644 pid=1754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.799000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:18:54.800000 audit[1755]: NETFILTER_CFG table=filter:49 family=10 entries=1 op=nft_register_chain pid=1755 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.800000 audit[1755]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe5ead1df0 a2=0 a3=7ffe5ead1ddc items=0 ppid=1644 pid=1755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.800000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:18:54.802000 audit[1757]: NETFILTER_CFG table=filter:50 family=10 entries=1 op=nft_register_rule pid=1757 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.802000 audit[1757]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe3e4b9580 a2=0 a3=7ffe3e4b956c items=0 ppid=1644 pid=1757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.802000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:18:54.803000 audit[1758]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=1758 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.803000 audit[1758]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc02298380 a2=0 a3=7ffc0229836c items=0 ppid=1644 pid=1758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.803000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:18:54.805000 audit[1760]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_rule pid=1760 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.805000 audit[1760]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd30030040 a2=0 a3=7ffd3003002c items=0 ppid=1644 pid=1760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.805000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:18:54.807000 audit[1763]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_rule pid=1763 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.807000 audit[1763]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffb1e75900 a2=0 a3=7fffb1e758ec items=0 ppid=1644 pid=1763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.807000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:18:54.810000 audit[1766]: NETFILTER_CFG table=filter:54 family=10 entries=1 op=nft_register_rule pid=1766 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.810000 audit[1766]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe8b6ccbf0 a2=0 a3=7ffe8b6ccbdc items=0 ppid=1644 pid=1766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.810000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:18:54.811000 audit[1767]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=1767 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.811000 audit[1767]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff96adfd30 a2=0 a3=7fff96adfd1c items=0 ppid=1644 pid=1767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.811000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:18:54.813000 audit[1769]: NETFILTER_CFG table=nat:56 family=10 entries=2 op=nft_register_chain pid=1769 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.813000 audit[1769]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffee0e447b0 a2=0 a3=7ffee0e4479c items=0 ppid=1644 pid=1769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.813000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:18:54.816000 audit[1772]: NETFILTER_CFG table=nat:57 family=10 entries=2 op=nft_register_chain pid=1772 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.816000 
audit[1772]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffd715a3780 a2=0 a3=7ffd715a376c items=0 ppid=1644 pid=1772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.816000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:18:54.817000 audit[1773]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=1773 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.817000 audit[1773]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe509df8f0 a2=0 a3=7ffe509df8dc items=0 ppid=1644 pid=1773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.817000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:18:54.818000 audit[1775]: NETFILTER_CFG table=nat:59 family=10 entries=2 op=nft_register_chain pid=1775 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.818000 audit[1775]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffefd303f10 a2=0 a3=7ffefd303efc items=0 ppid=1644 pid=1775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.818000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:18:54.819000 audit[1776]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1776 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.819000 audit[1776]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc57010fc0 a2=0 a3=7ffc57010fac items=0 ppid=1644 pid=1776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.819000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:18:54.821000 audit[1778]: NETFILTER_CFG table=filter:61 family=10 entries=1 op=nft_register_rule pid=1778 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.821000 audit[1778]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcebbde010 a2=0 a3=7ffcebbddffc items=0 ppid=1644 pid=1778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.821000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:18:54.823000 audit[1781]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_rule pid=1781 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:54.823000 audit[1781]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe2fbeb6e0 a2=0 a3=7ffe2fbeb6cc items=0 ppid=1644 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.823000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:18:54.825000 audit[1783]: NETFILTER_CFG table=filter:63 family=10 entries=3 op=nft_register_rule pid=1783 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:18:54.825000 audit[1783]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff8892dad0 a2=0 a3=7fff8892dabc items=0 ppid=1644 pid=1783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.825000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:18:54.826000 audit[1783]: NETFILTER_CFG table=nat:64 family=10 entries=7 op=nft_register_chain pid=1783 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:18:54.826000 audit[1783]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7fff8892dad0 a2=0 a3=7fff8892dabc items=0 ppid=1644 pid=1783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.826000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:18:55.292607 kubelet[1411]: E1002 19:18:55.292472 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:55.432434 systemd[1]: run-containerd-runc-k8s.io-fb9a6607abd6ee68b355180de7c59796ce038e9e2d42091a90f5b3c05fb3e1ee-runc.jWPfKg.mount: Deactivated successfully. 
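[annotation] The audit PROCTITLE records above carry the invoked command line hex-encoded, with NUL bytes separating the arguments. A minimal Python sketch (standard library only, not part of the logging tooling) that decodes the ip6tables-restore value recorded at 19:18:54.825:

    # Hex string copied verbatim from the PROCTITLE record above.
    proctitle = (
        "6970367461626C65732D726573746F7265002D770035002D5700"
        "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
    )
    # Arguments are NUL-separated; rebuild a readable argv.
    argv = [part.decode() for part in bytes.fromhex(proctitle).split(b"\x00")]
    print(" ".join(argv))
    # -> ip6tables-restore -w 5 -W 100000 --noflush --counters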
Oct 2 19:18:55.445147 kubelet[1411]: E1002 19:18:55.444897 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:18:55.445147 kubelet[1411]: E1002 19:18:55.445124 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7)\"" pod="kube-system/cilium-6mz8p" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" Oct 2 19:18:55.446244 kubelet[1411]: E1002 19:18:55.446219 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:18:55.720090 kubelet[1411]: I1002 19:18:55.719967 1411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xj7v9" podStartSLOduration=4.633214635 podCreationTimestamp="2023-10-02 19:18:41 +0000 UTC" firstStartedPulling="2023-10-02 19:18:44.352450113 +0000 UTC m=+4.438528887" lastFinishedPulling="2023-10-02 19:18:54.439135709 +0000 UTC m=+14.525214482" observedRunningTime="2023-10-02 19:18:55.719816233 +0000 UTC m=+15.805895006" watchObservedRunningTime="2023-10-02 19:18:55.71990023 +0000 UTC m=+15.805979003" Oct 2 19:18:55.911416 kubelet[1411]: W1002 19:18:55.911374 1411 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb36af421_0f93_45ab_a4ea_d3e88013f7f7.slice/cri-containerd-067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7.scope WatchSource:0}: container "067d73c50fa19d2d530683ef9e683fdb110985447cda15047982782e8db6dfc7" in namespace "k8s.io": not found Oct 2 19:18:56.293229 kubelet[1411]: E1002 19:18:56.293165 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:56.447595 kubelet[1411]: E1002 19:18:56.447558 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:18:57.294343 kubelet[1411]: E1002 19:18:57.294273 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:58.295130 kubelet[1411]: E1002 19:18:58.295056 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:59.016434 kubelet[1411]: W1002 19:18:59.016389 1411 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb36af421_0f93_45ab_a4ea_d3e88013f7f7.slice/cri-containerd-a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916.scope WatchSource:0}: task a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916 not found: not found Oct 2 19:18:59.295750 kubelet[1411]: E1002 19:18:59.295625 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:00.284844 kubelet[1411]: E1002 19:19:00.284802 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:00.296231 kubelet[1411]: E1002 19:19:00.296165 1411 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:01.296470 kubelet[1411]: E1002 19:19:01.296414 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:02.296886 kubelet[1411]: E1002 19:19:02.296813 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:03.297126 kubelet[1411]: E1002 19:19:03.297072 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:04.297907 kubelet[1411]: E1002 19:19:04.297856 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:05.298970 kubelet[1411]: E1002 19:19:05.298908 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:06.299230 kubelet[1411]: E1002 19:19:06.299173 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:07.300127 kubelet[1411]: E1002 19:19:07.300072 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:08.300897 kubelet[1411]: E1002 19:19:08.300851 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:09.301636 kubelet[1411]: E1002 19:19:09.301595 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:09.418720 kubelet[1411]: E1002 19:19:09.418688 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:19:09.420515 env[1101]: time="2023-10-02T19:19:09.420478699Z" level=info msg="CreateContainer within sandbox \"177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:19:09.433571 env[1101]: time="2023-10-02T19:19:09.433519554Z" level=info msg="CreateContainer within sandbox \"177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef\"" Oct 2 19:19:09.433911 env[1101]: time="2023-10-02T19:19:09.433886648Z" level=info msg="StartContainer for \"72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef\"" Oct 2 19:19:09.453477 systemd[1]: Started cri-containerd-72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef.scope. Oct 2 19:19:09.468123 systemd[1]: cri-containerd-72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef.scope: Deactivated successfully. 
Oct 2 19:19:09.721339 env[1101]: time="2023-10-02T19:19:09.721263847Z" level=info msg="shim disconnected" id=72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef Oct 2 19:19:09.721339 env[1101]: time="2023-10-02T19:19:09.721339331Z" level=warning msg="cleaning up after shim disconnected" id=72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef namespace=k8s.io Oct 2 19:19:09.721597 env[1101]: time="2023-10-02T19:19:09.721351935Z" level=info msg="cleaning up dead shim" Oct 2 19:19:09.752834 env[1101]: time="2023-10-02T19:19:09.752766633Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:19:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1808 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:19:09Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:19:09.753118 env[1101]: time="2023-10-02T19:19:09.753040599Z" level=error msg="copy shim log" error="read /proc/self/fd/55: file already closed" Oct 2 19:19:09.753344 env[1101]: time="2023-10-02T19:19:09.753258255Z" level=error msg="Failed to pipe stdout of container \"72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef\"" error="reading from a closed fifo" Oct 2 19:19:09.753344 env[1101]: time="2023-10-02T19:19:09.753274276Z" level=error msg="Failed to pipe stderr of container \"72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef\"" error="reading from a closed fifo" Oct 2 19:19:09.757287 env[1101]: time="2023-10-02T19:19:09.757232500Z" level=error msg="StartContainer for \"72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:19:09.757563 kubelet[1411]: E1002 19:19:09.757536 1411 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef" Oct 2 19:19:09.757694 kubelet[1411]: E1002 19:19:09.757677 1411 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:19:09.757694 kubelet[1411]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:19:09.757694 kubelet[1411]: rm /hostbin/cilium-mount Oct 2 19:19:09.757694 kubelet[1411]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-66xcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:19:09.757849 kubelet[1411]: E1002 19:19:09.757731 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-6mz8p" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" Oct 2 19:19:10.302288 kubelet[1411]: E1002 19:19:10.302233 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:10.429929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef-rootfs.mount: Deactivated successfully. 
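[annotation] Every start attempt for the mount-cgroup init container in this log fails with the same runc error, "write /proc/self/attr/keycreate: invalid argument". That proc file is where the OCI runtime writes the SELinux label for the new process's session keyring (the spec dumped above requests type spc_t, level s0); a kernel that rejects the label, commonly because the loaded SELinux policy does not recognise it, returns EINVAL and the container never starts. A minimal sketch of the underlying write, assuming a Linux host and a made-up full context string, not the actual runc code path:

    import errno

    label = "system_u:system_r:spc_t:s0"  # assumed context built from the spec above
    try:
        with open("/proc/self/attr/keycreate", "w") as f:
            f.write(label)  # what the runtime does before creating the task
        print("keyring label set to", label)
    except OSError as e:
        reason = "invalid argument" if e.errno == errno.EINVAL else e.strerror
        print(f"write /proc/self/attr/keycreate: {reason}")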
Oct 2 19:19:10.474913 kubelet[1411]: I1002 19:19:10.474892 1411 scope.go:117] "RemoveContainer" containerID="a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916" Oct 2 19:19:10.475323 kubelet[1411]: I1002 19:19:10.475292 1411 scope.go:117] "RemoveContainer" containerID="a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916" Oct 2 19:19:10.476009 env[1101]: time="2023-10-02T19:19:10.475973496Z" level=info msg="RemoveContainer for \"a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916\"" Oct 2 19:19:10.476461 env[1101]: time="2023-10-02T19:19:10.476431543Z" level=info msg="RemoveContainer for \"a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916\"" Oct 2 19:19:10.476549 env[1101]: time="2023-10-02T19:19:10.476517127Z" level=error msg="RemoveContainer for \"a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916\" failed" error="failed to set removing state for container \"a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916\": container is already in removing state" Oct 2 19:19:10.476661 kubelet[1411]: E1002 19:19:10.476645 1411 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916\": container is already in removing state" containerID="a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916" Oct 2 19:19:10.476713 kubelet[1411]: E1002 19:19:10.476670 1411 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916": container is already in removing state; Skipping pod "cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7)" Oct 2 19:19:10.476755 kubelet[1411]: E1002 19:19:10.476722 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:19:10.476947 kubelet[1411]: E1002 19:19:10.476910 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7)\"" pod="kube-system/cilium-6mz8p" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" Oct 2 19:19:10.557093 env[1101]: time="2023-10-02T19:19:10.556980015Z" level=info msg="RemoveContainer for \"a61d420a7b0734e7f3a222474a2017a92c792203565afc070c5f3c2d70286916\" returns successfully" Oct 2 19:19:11.302683 kubelet[1411]: E1002 19:19:11.302616 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:12.303720 kubelet[1411]: E1002 19:19:12.303668 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:12.826172 kubelet[1411]: W1002 19:19:12.826120 1411 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb36af421_0f93_45ab_a4ea_d3e88013f7f7.slice/cri-containerd-72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef.scope WatchSource:0}: task 72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef not found: not found Oct 2 19:19:13.304853 kubelet[1411]: E1002 19:19:13.304709 1411 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:14.305314 kubelet[1411]: E1002 19:19:14.305277 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:15.305755 kubelet[1411]: E1002 19:19:15.305710 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:16.305859 kubelet[1411]: E1002 19:19:16.305811 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:17.306754 kubelet[1411]: E1002 19:19:17.306695 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:17.890686 update_engine[1094]: I1002 19:19:17.890635 1094 update_attempter.cc:505] Updating boot flags... Oct 2 19:19:18.307451 kubelet[1411]: E1002 19:19:18.307301 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:19.307587 kubelet[1411]: E1002 19:19:19.307527 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:20.285239 kubelet[1411]: E1002 19:19:20.285190 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:20.308417 kubelet[1411]: E1002 19:19:20.308381 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:21.308926 kubelet[1411]: E1002 19:19:21.308862 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:22.309443 kubelet[1411]: E1002 19:19:22.309398 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:23.310017 kubelet[1411]: E1002 19:19:23.309976 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:24.310486 kubelet[1411]: E1002 19:19:24.310435 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:25.312076 kubelet[1411]: E1002 19:19:25.310959 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:25.418186 kubelet[1411]: E1002 19:19:25.418158 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:19:25.418471 kubelet[1411]: E1002 19:19:25.418447 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7)\"" pod="kube-system/cilium-6mz8p" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" Oct 2 19:19:26.311317 kubelet[1411]: E1002 19:19:26.311250 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:27.311641 kubelet[1411]: E1002 19:19:27.311587 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:19:28.312324 kubelet[1411]: E1002 19:19:28.312267 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:29.312462 kubelet[1411]: E1002 19:19:29.312386 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:30.313616 kubelet[1411]: E1002 19:19:30.313545 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:31.314791 kubelet[1411]: E1002 19:19:31.314730 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:32.315255 kubelet[1411]: E1002 19:19:32.315222 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:33.315592 kubelet[1411]: E1002 19:19:33.315552 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:34.316058 kubelet[1411]: E1002 19:19:34.316015 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:35.316169 kubelet[1411]: E1002 19:19:35.316126 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:36.316961 kubelet[1411]: E1002 19:19:36.316881 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:37.318038 kubelet[1411]: E1002 19:19:37.317986 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:38.318906 kubelet[1411]: E1002 19:19:38.318848 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:38.418206 kubelet[1411]: E1002 19:19:38.417958 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:19:38.422128 env[1101]: time="2023-10-02T19:19:38.422088726Z" level=info msg="CreateContainer within sandbox \"177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:19:38.433689 env[1101]: time="2023-10-02T19:19:38.433631585Z" level=info msg="CreateContainer within sandbox \"177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d\"" Oct 2 19:19:38.434157 env[1101]: time="2023-10-02T19:19:38.434130123Z" level=info msg="StartContainer for \"a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d\"" Oct 2 19:19:38.450539 systemd[1]: Started cri-containerd-a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d.scope. Oct 2 19:19:38.460746 systemd[1]: cri-containerd-a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d.scope: Deactivated successfully. Oct 2 19:19:38.460995 systemd[1]: Stopped cri-containerd-a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d.scope. 
Oct 2 19:19:38.463566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d-rootfs.mount: Deactivated successfully. Oct 2 19:19:38.470369 env[1101]: time="2023-10-02T19:19:38.470319537Z" level=info msg="shim disconnected" id=a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d Oct 2 19:19:38.470475 env[1101]: time="2023-10-02T19:19:38.470371525Z" level=warning msg="cleaning up after shim disconnected" id=a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d namespace=k8s.io Oct 2 19:19:38.470475 env[1101]: time="2023-10-02T19:19:38.470382906Z" level=info msg="cleaning up dead shim" Oct 2 19:19:38.483833 env[1101]: time="2023-10-02T19:19:38.483797457Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:19:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1862 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:19:38Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:19:38.484206 env[1101]: time="2023-10-02T19:19:38.484140362Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:19:38.484371 env[1101]: time="2023-10-02T19:19:38.484316403Z" level=error msg="Failed to pipe stdout of container \"a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d\"" error="reading from a closed fifo" Oct 2 19:19:38.484424 env[1101]: time="2023-10-02T19:19:38.484399610Z" level=error msg="Failed to pipe stderr of container \"a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d\"" error="reading from a closed fifo" Oct 2 19:19:38.487737 env[1101]: time="2023-10-02T19:19:38.487701574Z" level=error msg="StartContainer for \"a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:19:38.487901 kubelet[1411]: E1002 19:19:38.487859 1411 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d" Oct 2 19:19:38.488118 kubelet[1411]: E1002 19:19:38.487979 1411 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:19:38.488118 kubelet[1411]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:19:38.488118 kubelet[1411]: rm /hostbin/cilium-mount Oct 2 19:19:38.488118 kubelet[1411]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-66xcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:19:38.488118 kubelet[1411]: E1002 19:19:38.488014 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-6mz8p" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" Oct 2 19:19:38.518080 kubelet[1411]: I1002 19:19:38.518042 1411 scope.go:117] "RemoveContainer" containerID="72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef" Oct 2 19:19:38.518437 kubelet[1411]: I1002 19:19:38.518411 1411 scope.go:117] "RemoveContainer" containerID="72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef" Oct 2 19:19:38.519325 env[1101]: time="2023-10-02T19:19:38.519284640Z" level=info msg="RemoveContainer for \"72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef\"" Oct 2 19:19:38.519860 env[1101]: time="2023-10-02T19:19:38.519823554Z" level=info msg="RemoveContainer for \"72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef\"" Oct 2 19:19:38.519977 env[1101]: time="2023-10-02T19:19:38.519917631Z" level=error msg="RemoveContainer for \"72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef\" failed" error="failed to set removing state for container \"72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef\": container is already in removing state" Oct 2 19:19:38.520117 kubelet[1411]: E1002 19:19:38.520095 1411 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef\": 
container is already in removing state" containerID="72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef" Oct 2 19:19:38.520192 kubelet[1411]: E1002 19:19:38.520130 1411 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef": container is already in removing state; Skipping pod "cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7)" Oct 2 19:19:38.520239 kubelet[1411]: E1002 19:19:38.520206 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:19:38.520488 kubelet[1411]: E1002 19:19:38.520470 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7)\"" pod="kube-system/cilium-6mz8p" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" Oct 2 19:19:38.523063 env[1101]: time="2023-10-02T19:19:38.523022063Z" level=info msg="RemoveContainer for \"72df7d7d56d4ea511a66316da1414eccb8adb312ebc0a3234cf0396995b626ef\" returns successfully" Oct 2 19:19:39.319751 kubelet[1411]: E1002 19:19:39.319691 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:40.285393 kubelet[1411]: E1002 19:19:40.285327 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:40.320723 kubelet[1411]: E1002 19:19:40.320675 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:41.321767 kubelet[1411]: E1002 19:19:41.321700 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:41.574637 kubelet[1411]: W1002 19:19:41.574510 1411 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb36af421_0f93_45ab_a4ea_d3e88013f7f7.slice/cri-containerd-a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d.scope WatchSource:0}: task a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d not found: not found Oct 2 19:19:42.322858 kubelet[1411]: E1002 19:19:42.322776 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:43.323529 kubelet[1411]: E1002 19:19:43.323472 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:44.324230 kubelet[1411]: E1002 19:19:44.324186 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:45.325252 kubelet[1411]: E1002 19:19:45.325203 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:46.325960 kubelet[1411]: E1002 19:19:46.325876 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:47.326110 kubelet[1411]: E1002 19:19:47.326048 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:19:48.326491 kubelet[1411]: E1002 19:19:48.326425 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:49.326956 kubelet[1411]: E1002 19:19:49.326877 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:49.418279 kubelet[1411]: E1002 19:19:49.418200 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:19:49.418463 kubelet[1411]: E1002 19:19:49.418418 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7)\"" pod="kube-system/cilium-6mz8p" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" Oct 2 19:19:50.328079 kubelet[1411]: E1002 19:19:50.328012 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:51.328848 kubelet[1411]: E1002 19:19:51.328759 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:52.329715 kubelet[1411]: E1002 19:19:52.329684 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:53.330103 kubelet[1411]: E1002 19:19:53.330058 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:54.330704 kubelet[1411]: E1002 19:19:54.330638 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:55.331333 kubelet[1411]: E1002 19:19:55.331271 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:56.331443 kubelet[1411]: E1002 19:19:56.331392 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:57.331743 kubelet[1411]: E1002 19:19:57.331685 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:58.332555 kubelet[1411]: E1002 19:19:58.332497 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:59.333374 kubelet[1411]: E1002 19:19:59.333297 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:00.285094 kubelet[1411]: E1002 19:20:00.285024 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:00.334409 kubelet[1411]: E1002 19:20:00.334348 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:01.334960 kubelet[1411]: E1002 19:20:01.334850 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:02.335509 kubelet[1411]: E1002 19:20:02.335428 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:20:03.336648 kubelet[1411]: E1002 19:20:03.336579 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:03.418755 kubelet[1411]: E1002 19:20:03.418702 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:20:03.419022 kubelet[1411]: E1002 19:20:03.418970 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7)\"" pod="kube-system/cilium-6mz8p" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" Oct 2 19:20:04.336759 kubelet[1411]: E1002 19:20:04.336718 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:05.337544 kubelet[1411]: E1002 19:20:05.337494 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:06.338400 kubelet[1411]: E1002 19:20:06.338333 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:07.338824 kubelet[1411]: E1002 19:20:07.338781 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:08.339719 kubelet[1411]: E1002 19:20:08.339670 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:09.340099 kubelet[1411]: E1002 19:20:09.340044 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:10.341011 kubelet[1411]: E1002 19:20:10.340955 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:11.341696 kubelet[1411]: E1002 19:20:11.341648 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:12.342446 kubelet[1411]: E1002 19:20:12.342387 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:13.342568 kubelet[1411]: E1002 19:20:13.342511 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:14.343647 kubelet[1411]: E1002 19:20:14.343596 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:14.418060 kubelet[1411]: E1002 19:20:14.418011 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:20:14.418298 kubelet[1411]: E1002 19:20:14.418275 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7)\"" pod="kube-system/cilium-6mz8p" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" Oct 2 19:20:15.344763 kubelet[1411]: E1002 19:20:15.344658 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:16.345841 kubelet[1411]: E1002 19:20:16.345774 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:16.418185 kubelet[1411]: E1002 19:20:16.418145 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:20:17.346111 kubelet[1411]: E1002 19:20:17.346040 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:18.346450 kubelet[1411]: E1002 19:20:18.346412 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:19.347534 kubelet[1411]: E1002 19:20:19.347468 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:20.285502 kubelet[1411]: E1002 19:20:20.285437 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:20.347922 kubelet[1411]: E1002 19:20:20.347862 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:21.348911 kubelet[1411]: E1002 19:20:21.348863 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:22.349915 kubelet[1411]: E1002 19:20:22.349818 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:23.350827 kubelet[1411]: E1002 19:20:23.350765 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:24.351237 kubelet[1411]: E1002 19:20:24.351171 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:25.351749 kubelet[1411]: E1002 19:20:25.351694 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:26.352874 kubelet[1411]: E1002 19:20:26.352806 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:26.417702 kubelet[1411]: E1002 19:20:26.417673 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:20:26.419329 env[1101]: time="2023-10-02T19:20:26.419289643Z" level=info msg="CreateContainer within sandbox \"177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:20:26.807980 env[1101]: time="2023-10-02T19:20:26.807776729Z" level=info msg="CreateContainer within sandbox \"177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"f84d9d5350aa2addaf117dd67999ef80f3c7faa6ff007fc8c02189766a43c739\"" Oct 2 19:20:26.808569 env[1101]: time="2023-10-02T19:20:26.808503542Z" level=info msg="StartContainer for \"f84d9d5350aa2addaf117dd67999ef80f3c7faa6ff007fc8c02189766a43c739\"" Oct 2 19:20:26.824277 systemd[1]: Started 
cri-containerd-f84d9d5350aa2addaf117dd67999ef80f3c7faa6ff007fc8c02189766a43c739.scope. Oct 2 19:20:26.832081 systemd[1]: cri-containerd-f84d9d5350aa2addaf117dd67999ef80f3c7faa6ff007fc8c02189766a43c739.scope: Deactivated successfully. Oct 2 19:20:26.832328 systemd[1]: Stopped cri-containerd-f84d9d5350aa2addaf117dd67999ef80f3c7faa6ff007fc8c02189766a43c739.scope. Oct 2 19:20:26.834736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f84d9d5350aa2addaf117dd67999ef80f3c7faa6ff007fc8c02189766a43c739-rootfs.mount: Deactivated successfully. Oct 2 19:20:26.846411 env[1101]: time="2023-10-02T19:20:26.846359583Z" level=info msg="shim disconnected" id=f84d9d5350aa2addaf117dd67999ef80f3c7faa6ff007fc8c02189766a43c739 Oct 2 19:20:26.846599 env[1101]: time="2023-10-02T19:20:26.846429026Z" level=warning msg="cleaning up after shim disconnected" id=f84d9d5350aa2addaf117dd67999ef80f3c7faa6ff007fc8c02189766a43c739 namespace=k8s.io Oct 2 19:20:26.846599 env[1101]: time="2023-10-02T19:20:26.846438705Z" level=info msg="cleaning up dead shim" Oct 2 19:20:26.854382 env[1101]: time="2023-10-02T19:20:26.854315122Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:20:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1903 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:20:26Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f84d9d5350aa2addaf117dd67999ef80f3c7faa6ff007fc8c02189766a43c739/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:20:26.854699 env[1101]: time="2023-10-02T19:20:26.854624674Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:20:26.857044 env[1101]: time="2023-10-02T19:20:26.856994615Z" level=error msg="Failed to pipe stderr of container \"f84d9d5350aa2addaf117dd67999ef80f3c7faa6ff007fc8c02189766a43c739\"" error="reading from a closed fifo" Oct 2 19:20:26.857147 env[1101]: time="2023-10-02T19:20:26.857003391Z" level=error msg="Failed to pipe stdout of container \"f84d9d5350aa2addaf117dd67999ef80f3c7faa6ff007fc8c02189766a43c739\"" error="reading from a closed fifo" Oct 2 19:20:27.087352 env[1101]: time="2023-10-02T19:20:27.087268302Z" level=error msg="StartContainer for \"f84d9d5350aa2addaf117dd67999ef80f3c7faa6ff007fc8c02189766a43c739\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:20:27.087550 kubelet[1411]: E1002 19:20:27.087466 1411 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f84d9d5350aa2addaf117dd67999ef80f3c7faa6ff007fc8c02189766a43c739" Oct 2 19:20:27.087621 kubelet[1411]: E1002 19:20:27.087600 1411 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:20:27.087621 kubelet[1411]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 
19:20:27.087621 kubelet[1411]: rm /hostbin/cilium-mount Oct 2 19:20:27.087621 kubelet[1411]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-66xcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:20:27.087765 kubelet[1411]: E1002 19:20:27.087634 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-6mz8p" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" Oct 2 19:20:27.353817 kubelet[1411]: E1002 19:20:27.353663 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:27.588769 kubelet[1411]: I1002 19:20:27.588734 1411 scope.go:117] "RemoveContainer" containerID="a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d" Oct 2 19:20:27.589113 kubelet[1411]: I1002 19:20:27.589079 1411 scope.go:117] "RemoveContainer" containerID="a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d" Oct 2 19:20:27.589690 env[1101]: time="2023-10-02T19:20:27.589657923Z" level=info msg="RemoveContainer for \"a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d\"" Oct 2 19:20:27.590089 env[1101]: time="2023-10-02T19:20:27.590059392Z" level=info msg="RemoveContainer for \"a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d\"" Oct 2 19:20:27.590164 env[1101]: time="2023-10-02T19:20:27.590137531Z" level=error msg="RemoveContainer for \"a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d\" failed" error="failed to set removing state for container \"a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d\": container is already in removing state" Oct 2 
19:20:27.590285 kubelet[1411]: E1002 19:20:27.590267 1411 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d\": container is already in removing state" containerID="a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d" Oct 2 19:20:27.590358 kubelet[1411]: E1002 19:20:27.590295 1411 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d": container is already in removing state; Skipping pod "cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7)" Oct 2 19:20:27.590358 kubelet[1411]: E1002 19:20:27.590356 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:20:27.590566 kubelet[1411]: E1002 19:20:27.590550 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7)\"" pod="kube-system/cilium-6mz8p" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" Oct 2 19:20:27.710058 env[1101]: time="2023-10-02T19:20:27.709919654Z" level=info msg="RemoveContainer for \"a938c4dfb19b0a4b7c8915a452159d9af02c76bdc627cdd563d20c79a249cc1d\" returns successfully" Oct 2 19:20:28.354413 kubelet[1411]: E1002 19:20:28.354377 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:29.355319 kubelet[1411]: E1002 19:20:29.355272 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:29.952092 kubelet[1411]: W1002 19:20:29.952043 1411 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb36af421_0f93_45ab_a4ea_d3e88013f7f7.slice/cri-containerd-f84d9d5350aa2addaf117dd67999ef80f3c7faa6ff007fc8c02189766a43c739.scope WatchSource:0}: task f84d9d5350aa2addaf117dd67999ef80f3c7faa6ff007fc8c02189766a43c739 not found: not found Oct 2 19:20:30.356368 kubelet[1411]: E1002 19:20:30.356338 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:31.357002 kubelet[1411]: E1002 19:20:31.356958 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:32.357760 kubelet[1411]: E1002 19:20:32.357701 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:33.358700 kubelet[1411]: E1002 19:20:33.358643 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:34.359030 kubelet[1411]: E1002 19:20:34.358989 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:35.359606 kubelet[1411]: E1002 19:20:35.359515 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:36.360266 kubelet[1411]: E1002 
19:20:36.360215 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:37.360456 kubelet[1411]: E1002 19:20:37.360425 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:38.361293 kubelet[1411]: E1002 19:20:38.361221 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:39.361769 kubelet[1411]: E1002 19:20:39.361692 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:40.284981 kubelet[1411]: E1002 19:20:40.284916 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:40.306824 kubelet[1411]: E1002 19:20:40.306756 1411 kubelet_node_status.go:452] "Node not becoming ready in time after startup" Oct 2 19:20:40.362351 kubelet[1411]: E1002 19:20:40.362290 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:40.418703 kubelet[1411]: E1002 19:20:40.418652 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:20:40.418952 kubelet[1411]: E1002 19:20:40.418916 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7)\"" pod="kube-system/cilium-6mz8p" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" Oct 2 19:20:40.645663 kubelet[1411]: E1002 19:20:40.645627 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:41.363335 kubelet[1411]: E1002 19:20:41.363241 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:42.364092 kubelet[1411]: E1002 19:20:42.364038 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:43.364461 kubelet[1411]: E1002 19:20:43.364381 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:44.365217 kubelet[1411]: E1002 19:20:44.365125 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:45.366300 kubelet[1411]: E1002 19:20:45.366241 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:45.646744 kubelet[1411]: E1002 19:20:45.646624 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:46.366958 kubelet[1411]: E1002 19:20:46.366872 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:47.367095 kubelet[1411]: E1002 19:20:47.367030 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:20:48.368066 kubelet[1411]: E1002 19:20:48.368026 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:49.369032 kubelet[1411]: E1002 19:20:49.368927 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:50.370196 kubelet[1411]: E1002 19:20:50.370139 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:50.648021 kubelet[1411]: E1002 19:20:50.647885 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:51.370601 kubelet[1411]: E1002 19:20:51.370534 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:51.418594 kubelet[1411]: E1002 19:20:51.418546 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:20:51.418804 kubelet[1411]: E1002 19:20:51.418780 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7)\"" pod="kube-system/cilium-6mz8p" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" Oct 2 19:20:52.371594 kubelet[1411]: E1002 19:20:52.371517 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:53.372534 kubelet[1411]: E1002 19:20:53.372412 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:54.372665 kubelet[1411]: E1002 19:20:54.372591 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:55.373713 kubelet[1411]: E1002 19:20:55.373651 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:55.649106 kubelet[1411]: E1002 19:20:55.649001 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:56.374751 kubelet[1411]: E1002 19:20:56.374685 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:57.378965 kubelet[1411]: E1002 19:20:57.378871 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:58.379280 kubelet[1411]: E1002 19:20:58.379213 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:59.379881 kubelet[1411]: E1002 19:20:59.379810 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:00.284685 kubelet[1411]: E1002 19:21:00.284642 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:00.380468 kubelet[1411]: E1002 19:21:00.380390 
1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:00.650045 kubelet[1411]: E1002 19:21:00.650004 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:01.381102 kubelet[1411]: E1002 19:21:01.381038 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:02.381641 kubelet[1411]: E1002 19:21:02.381582 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:03.382582 kubelet[1411]: E1002 19:21:03.382514 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:04.383316 kubelet[1411]: E1002 19:21:04.383258 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:04.921067 update_engine[1094]: I1002 19:21:04.920987 1094 prefs.cc:51] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 2 19:21:04.921067 update_engine[1094]: I1002 19:21:04.921034 1094 prefs.cc:51] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 2 19:21:04.921552 update_engine[1094]: I1002 19:21:04.921422 1094 prefs.cc:51] aleph-version not present in /var/lib/update_engine/prefs Oct 2 19:21:04.921806 update_engine[1094]: I1002 19:21:04.921783 1094 omaha_request_params.cc:62] Current group set to lts Oct 2 19:21:04.921988 update_engine[1094]: I1002 19:21:04.921929 1094 update_attempter.cc:495] Already updated boot flags. Skipping. Oct 2 19:21:04.921988 update_engine[1094]: I1002 19:21:04.921962 1094 update_attempter.cc:638] Scheduling an action processor start. Oct 2 19:21:04.921988 update_engine[1094]: I1002 19:21:04.921978 1094 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 2 19:21:04.921988 update_engine[1094]: I1002 19:21:04.922005 1094 prefs.cc:51] previous-version not present in /var/lib/update_engine/prefs Oct 2 19:21:04.922279 update_engine[1094]: I1002 19:21:04.922053 1094 omaha_request_action.cc:268] Posting an Omaha request to https://public.update.flatcar-linux.net/v1/update/ Oct 2 19:21:04.922279 update_engine[1094]: I1002 19:21:04.922076 1094 omaha_request_action.cc:269] Request: Oct 2 19:21:04.922279 update_engine[1094]: Oct 2 19:21:04.922279 update_engine[1094]: Oct 2 19:21:04.922279 update_engine[1094]: Oct 2 19:21:04.922279 update_engine[1094]: Oct 2 19:21:04.922279 update_engine[1094]: Oct 2 19:21:04.922279 update_engine[1094]: Oct 2 19:21:04.922279 update_engine[1094]: Oct 2 19:21:04.922279 update_engine[1094]: Oct 2 19:21:04.922279 update_engine[1094]: I1002 19:21:04.922082 1094 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 2 19:21:04.922607 locksmithd[1133]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 2 19:21:04.923089 update_engine[1094]: I1002 19:21:04.923060 1094 libcurl_http_fetcher.cc:174] Setting up curl options for HTTPS Oct 2 19:21:04.923273 update_engine[1094]: I1002 19:21:04.923249 1094 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 2 19:21:05.383706 kubelet[1411]: E1002 19:21:05.383678 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:05.651463 kubelet[1411]: E1002 19:21:05.651366 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:06.072837 update_engine[1094]: I1002 19:21:06.072798 1094 prefs.cc:51] update-server-cert-0-2 not present in /var/lib/update_engine/prefs Oct 2 19:21:06.073130 update_engine[1094]: I1002 19:21:06.072973 1094 prefs.cc:51] update-server-cert-0-1 not present in /var/lib/update_engine/prefs Oct 2 19:21:06.073130 update_engine[1094]: I1002 19:21:06.073052 1094 prefs.cc:51] update-server-cert-0-0 not present in /var/lib/update_engine/prefs Oct 2 19:21:06.384161 kubelet[1411]: E1002 19:21:06.384033 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:06.417811 kubelet[1411]: E1002 19:21:06.417774 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:06.418021 kubelet[1411]: E1002 19:21:06.418006 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7)\"" pod="kube-system/cilium-6mz8p" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" Oct 2 19:21:06.424345 update_engine[1094]: I1002 19:21:06.424315 1094 libcurl_http_fetcher.cc:263] HTTP response code: 200 Oct 2 19:21:06.425416 update_engine[1094]: I1002 19:21:06.425400 1094 libcurl_http_fetcher.cc:320] Transfer completed (200), 314 bytes downloaded Oct 2 19:21:06.425416 update_engine[1094]: I1002 19:21:06.425410 1094 omaha_request_action.cc:619] Omaha request response: Oct 2 19:21:06.425416 update_engine[1094]: Oct 2 19:21:06.429495 update_engine[1094]: I1002 19:21:06.429481 1094 omaha_request_action.cc:409] No update. Oct 2 19:21:06.429530 update_engine[1094]: I1002 19:21:06.429495 1094 action_processor.cc:82] ActionProcessor::ActionComplete: finished OmahaRequestAction, starting OmahaResponseHandlerAction Oct 2 19:21:06.429530 update_engine[1094]: I1002 19:21:06.429499 1094 omaha_response_handler_action.cc:36] There are no updates. Aborting. Oct 2 19:21:06.429530 update_engine[1094]: I1002 19:21:06.429502 1094 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaResponseHandlerAction action failed. Aborting processing. Oct 2 19:21:06.429530 update_engine[1094]: I1002 19:21:06.429505 1094 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaResponseHandlerAction Oct 2 19:21:06.429530 update_engine[1094]: I1002 19:21:06.429507 1094 update_attempter.cc:302] Processing Done. Oct 2 19:21:06.429530 update_engine[1094]: I1002 19:21:06.429517 1094 update_attempter.cc:338] No update. 
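The update_engine exchange above posts an Omaha request to public.update.flatcar-linux.net, receives a 200 with a small XML body (the 314 downloaded bytes are not reproduced in this capture), and concludes "No update." A rough sketch of that decision, assuming an illustrative Omaha v3 "noupdate" response rather than the real payload:

package main

import (
	"encoding/xml"
	"fmt"
)

// Parse an Omaha-style response and treat updatecheck status "noupdate" the
// way update_engine does above ("There are no updates. Aborting."). The
// sample XML and appid are placeholders, not the actual response body.
type omahaResponse struct {
	Apps []struct {
		UpdateCheck struct {
			Status string `xml:"status,attr"`
		} `xml:"updatecheck"`
	} `xml:"app"`
}

func main() {
	sample := `<response protocol="3.0"><app appid="example-app"><updatecheck status="noupdate"/></app></response>`
	var resp omahaResponse
	if err := xml.Unmarshal([]byte(sample), &resp); err != nil {
		panic(err)
	}
	for _, app := range resp.Apps {
		if app.UpdateCheck.Status == "noupdate" {
			fmt.Println("no update available; scheduling next check")
		}
	}
}
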
Oct 2 19:21:06.429530 update_engine[1094]: I1002 19:21:06.429525 1094 update_check_scheduler.cc:74] Next update check in 47m10s Oct 2 19:21:06.429892 locksmithd[1133]: LastCheckedTime=1696274466 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Oct 2 19:21:07.385036 kubelet[1411]: E1002 19:21:07.384993 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:08.386000 kubelet[1411]: E1002 19:21:08.385930 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:09.386660 kubelet[1411]: E1002 19:21:09.386585 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:10.387706 kubelet[1411]: E1002 19:21:10.387653 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:10.652584 kubelet[1411]: E1002 19:21:10.652460 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:11.387801 kubelet[1411]: E1002 19:21:11.387750 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:12.388225 kubelet[1411]: E1002 19:21:12.388144 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:13.388783 kubelet[1411]: E1002 19:21:13.388714 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:14.389686 kubelet[1411]: E1002 19:21:14.389575 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:15.389790 kubelet[1411]: E1002 19:21:15.389720 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:15.653683 kubelet[1411]: E1002 19:21:15.653594 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:16.390662 kubelet[1411]: E1002 19:21:16.390606 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:17.391081 kubelet[1411]: E1002 19:21:17.391027 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:17.418691 kubelet[1411]: E1002 19:21:17.418637 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:17.418892 kubelet[1411]: E1002 19:21:17.418877 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7)\"" pod="kube-system/cilium-6mz8p" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" Oct 2 19:21:18.391350 kubelet[1411]: E1002 19:21:18.391297 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:21:19.391507 kubelet[1411]: E1002 19:21:19.391439 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:20.285482 kubelet[1411]: E1002 19:21:20.285424 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:20.392284 kubelet[1411]: E1002 19:21:20.392224 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:20.654170 kubelet[1411]: E1002 19:21:20.654141 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:21.393015 kubelet[1411]: E1002 19:21:21.392961 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:22.394033 kubelet[1411]: E1002 19:21:22.393978 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:23.395188 kubelet[1411]: E1002 19:21:23.395120 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:24.395502 kubelet[1411]: E1002 19:21:24.395449 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:25.396578 kubelet[1411]: E1002 19:21:25.396513 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:25.418249 kubelet[1411]: E1002 19:21:25.418215 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:25.655448 kubelet[1411]: E1002 19:21:25.655347 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:26.397482 kubelet[1411]: E1002 19:21:26.397432 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:27.398329 kubelet[1411]: E1002 19:21:27.398266 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:28.398964 kubelet[1411]: E1002 19:21:28.398906 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:29.399089 kubelet[1411]: E1002 19:21:29.399013 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:29.417663 kubelet[1411]: E1002 19:21:29.417621 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:29.417878 kubelet[1411]: E1002 19:21:29.417818 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7)\"" pod="kube-system/cilium-6mz8p" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" 
Oct 2 19:21:30.399463 kubelet[1411]: E1002 19:21:30.399391 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:30.656082 kubelet[1411]: E1002 19:21:30.655975 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:31.399839 kubelet[1411]: E1002 19:21:31.399730 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:32.400012 kubelet[1411]: E1002 19:21:32.399958 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:33.401028 kubelet[1411]: E1002 19:21:33.400961 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:34.402150 kubelet[1411]: E1002 19:21:34.402101 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:35.403119 kubelet[1411]: E1002 19:21:35.403061 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:35.657172 kubelet[1411]: E1002 19:21:35.657056 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:36.403489 kubelet[1411]: E1002 19:21:36.403427 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:37.403858 kubelet[1411]: E1002 19:21:37.403798 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:38.404773 kubelet[1411]: E1002 19:21:38.404707 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:39.405697 kubelet[1411]: E1002 19:21:39.405642 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:40.285343 kubelet[1411]: E1002 19:21:40.285291 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:40.406193 kubelet[1411]: E1002 19:21:40.406115 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:40.657688 kubelet[1411]: E1002 19:21:40.657655 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:41.406958 kubelet[1411]: E1002 19:21:41.406880 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:41.418643 kubelet[1411]: E1002 19:21:41.418604 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:41.418898 kubelet[1411]: E1002 19:21:41.418875 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed 
container=mount-cgroup pod=cilium-6mz8p_kube-system(b36af421-0f93-45ab-a4ea-d3e88013f7f7)\"" pod="kube-system/cilium-6mz8p" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" Oct 2 19:21:42.407227 kubelet[1411]: E1002 19:21:42.407155 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:43.407587 kubelet[1411]: E1002 19:21:43.407533 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:44.408599 kubelet[1411]: E1002 19:21:44.408543 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:45.409368 kubelet[1411]: E1002 19:21:45.409309 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:45.659205 kubelet[1411]: E1002 19:21:45.659171 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:46.410411 kubelet[1411]: E1002 19:21:46.410321 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:47.411033 kubelet[1411]: E1002 19:21:47.410969 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:47.967049 env[1101]: time="2023-10-02T19:21:47.967004234Z" level=info msg="StopPodSandbox for \"177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a\"" Oct 2 19:21:47.967470 env[1101]: time="2023-10-02T19:21:47.967062012Z" level=info msg="Container to stop \"f84d9d5350aa2addaf117dd67999ef80f3c7faa6ff007fc8c02189766a43c739\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:21:47.968423 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a-shm.mount: Deactivated successfully. Oct 2 19:21:47.971000 audit: BPF prog-id=64 op=UNLOAD Oct 2 19:21:47.972039 systemd[1]: cri-containerd-177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a.scope: Deactivated successfully. Oct 2 19:21:47.972979 kernel: kauditd_printk_skb: 186 callbacks suppressed Oct 2 19:21:47.973041 kernel: audit: type=1334 audit(1696274507.971:643): prog-id=64 op=UNLOAD Oct 2 19:21:47.975000 audit: BPF prog-id=68 op=UNLOAD Oct 2 19:21:47.977974 kernel: audit: type=1334 audit(1696274507.975:644): prog-id=68 op=UNLOAD Oct 2 19:21:47.986890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a-rootfs.mount: Deactivated successfully. 
Oct 2 19:21:47.992272 env[1101]: time="2023-10-02T19:21:47.992199527Z" level=info msg="shim disconnected" id=177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a Oct 2 19:21:47.992272 env[1101]: time="2023-10-02T19:21:47.992252256Z" level=warning msg="cleaning up after shim disconnected" id=177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a namespace=k8s.io Oct 2 19:21:47.992272 env[1101]: time="2023-10-02T19:21:47.992264499Z" level=info msg="cleaning up dead shim" Oct 2 19:21:47.998586 env[1101]: time="2023-10-02T19:21:47.998543906Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:21:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1941 runtime=io.containerd.runc.v2\n" Oct 2 19:21:47.998895 env[1101]: time="2023-10-02T19:21:47.998860673Z" level=info msg="TearDown network for sandbox \"177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a\" successfully" Oct 2 19:21:47.998895 env[1101]: time="2023-10-02T19:21:47.998883726Z" level=info msg="StopPodSandbox for \"177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a\" returns successfully" Oct 2 19:21:48.099972 kubelet[1411]: I1002 19:21:48.099899 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b36af421-0f93-45ab-a4ea-d3e88013f7f7" (UID: "b36af421-0f93-45ab-a4ea-d3e88013f7f7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:21:48.099972 kubelet[1411]: I1002 19:21:48.099925 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-host-proc-sys-net\") pod \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " Oct 2 19:21:48.100213 kubelet[1411]: I1002 19:21:48.100027 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-xtables-lock\") pod \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " Oct 2 19:21:48.100213 kubelet[1411]: I1002 19:21:48.100045 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-lib-modules\") pod \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " Oct 2 19:21:48.100213 kubelet[1411]: I1002 19:21:48.100062 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-hostproc\") pod \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " Oct 2 19:21:48.100213 kubelet[1411]: I1002 19:21:48.100076 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-etc-cni-netd\") pod \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " Oct 2 19:21:48.100213 kubelet[1411]: I1002 19:21:48.100097 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-host-proc-sys-kernel\") pod \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " Oct 2 19:21:48.100213 kubelet[1411]: I1002 19:21:48.100115 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b36af421-0f93-45ab-a4ea-d3e88013f7f7" (UID: "b36af421-0f93-45ab-a4ea-d3e88013f7f7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:21:48.100213 kubelet[1411]: I1002 19:21:48.100123 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66xcv\" (UniqueName: \"kubernetes.io/projected/b36af421-0f93-45ab-a4ea-d3e88013f7f7-kube-api-access-66xcv\") pod \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " Oct 2 19:21:48.100213 kubelet[1411]: I1002 19:21:48.100135 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b36af421-0f93-45ab-a4ea-d3e88013f7f7" (UID: "b36af421-0f93-45ab-a4ea-d3e88013f7f7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:21:48.100213 kubelet[1411]: I1002 19:21:48.100159 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b36af421-0f93-45ab-a4ea-d3e88013f7f7" (UID: "b36af421-0f93-45ab-a4ea-d3e88013f7f7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:21:48.100213 kubelet[1411]: I1002 19:21:48.100170 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b36af421-0f93-45ab-a4ea-d3e88013f7f7-cilium-config-path\") pod \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " Oct 2 19:21:48.100213 kubelet[1411]: I1002 19:21:48.100173 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b36af421-0f93-45ab-a4ea-d3e88013f7f7" (UID: "b36af421-0f93-45ab-a4ea-d3e88013f7f7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:21:48.100213 kubelet[1411]: I1002 19:21:48.100194 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-hostproc" (OuterVolumeSpecName: "hostproc") pod "b36af421-0f93-45ab-a4ea-d3e88013f7f7" (UID: "b36af421-0f93-45ab-a4ea-d3e88013f7f7"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:21:48.100213 kubelet[1411]: I1002 19:21:48.100200 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b36af421-0f93-45ab-a4ea-d3e88013f7f7-clustermesh-secrets\") pod \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " Oct 2 19:21:48.100213 kubelet[1411]: I1002 19:21:48.100223 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-cni-path\") pod \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " Oct 2 19:21:48.100668 kubelet[1411]: I1002 19:21:48.100245 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-cilium-cgroup\") pod \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " Oct 2 19:21:48.100668 kubelet[1411]: I1002 19:21:48.100266 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-bpf-maps\") pod \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " Oct 2 19:21:48.100668 kubelet[1411]: I1002 19:21:48.100291 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b36af421-0f93-45ab-a4ea-d3e88013f7f7-hubble-tls\") pod \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " Oct 2 19:21:48.100668 kubelet[1411]: I1002 19:21:48.100313 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-cilium-run\") pod \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\" (UID: \"b36af421-0f93-45ab-a4ea-d3e88013f7f7\") " Oct 2 19:21:48.100668 kubelet[1411]: I1002 19:21:48.100343 1411 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-host-proc-sys-kernel\") on node \"10.0.0.130\" DevicePath \"\"" Oct 2 19:21:48.100668 kubelet[1411]: I1002 19:21:48.100357 1411 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-hostproc\") on node \"10.0.0.130\" DevicePath \"\"" Oct 2 19:21:48.100668 kubelet[1411]: I1002 19:21:48.100369 1411 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-etc-cni-netd\") on node \"10.0.0.130\" DevicePath \"\"" Oct 2 19:21:48.100668 kubelet[1411]: I1002 19:21:48.100381 1411 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-host-proc-sys-net\") on node \"10.0.0.130\" DevicePath \"\"" Oct 2 19:21:48.100668 kubelet[1411]: I1002 19:21:48.100393 1411 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-xtables-lock\") on node \"10.0.0.130\" DevicePath \"\"" Oct 2 19:21:48.100668 kubelet[1411]: I1002 19:21:48.100404 1411 
reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-lib-modules\") on node \"10.0.0.130\" DevicePath \"\"" Oct 2 19:21:48.100668 kubelet[1411]: I1002 19:21:48.100422 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b36af421-0f93-45ab-a4ea-d3e88013f7f7" (UID: "b36af421-0f93-45ab-a4ea-d3e88013f7f7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:21:48.100668 kubelet[1411]: I1002 19:21:48.100442 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-cni-path" (OuterVolumeSpecName: "cni-path") pod "b36af421-0f93-45ab-a4ea-d3e88013f7f7" (UID: "b36af421-0f93-45ab-a4ea-d3e88013f7f7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:21:48.100668 kubelet[1411]: I1002 19:21:48.100463 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b36af421-0f93-45ab-a4ea-d3e88013f7f7" (UID: "b36af421-0f93-45ab-a4ea-d3e88013f7f7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:21:48.100668 kubelet[1411]: I1002 19:21:48.100483 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b36af421-0f93-45ab-a4ea-d3e88013f7f7" (UID: "b36af421-0f93-45ab-a4ea-d3e88013f7f7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:21:48.102615 kubelet[1411]: I1002 19:21:48.102574 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b36af421-0f93-45ab-a4ea-d3e88013f7f7-kube-api-access-66xcv" (OuterVolumeSpecName: "kube-api-access-66xcv") pod "b36af421-0f93-45ab-a4ea-d3e88013f7f7" (UID: "b36af421-0f93-45ab-a4ea-d3e88013f7f7"). InnerVolumeSpecName "kube-api-access-66xcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:21:48.103455 kubelet[1411]: I1002 19:21:48.103411 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36af421-0f93-45ab-a4ea-d3e88013f7f7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b36af421-0f93-45ab-a4ea-d3e88013f7f7" (UID: "b36af421-0f93-45ab-a4ea-d3e88013f7f7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:21:48.103455 kubelet[1411]: I1002 19:21:48.103412 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b36af421-0f93-45ab-a4ea-d3e88013f7f7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b36af421-0f93-45ab-a4ea-d3e88013f7f7" (UID: "b36af421-0f93-45ab-a4ea-d3e88013f7f7"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:21:48.103559 kubelet[1411]: I1002 19:21:48.103508 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b36af421-0f93-45ab-a4ea-d3e88013f7f7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b36af421-0f93-45ab-a4ea-d3e88013f7f7" (UID: "b36af421-0f93-45ab-a4ea-d3e88013f7f7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:21:48.103863 systemd[1]: var-lib-kubelet-pods-b36af421\x2d0f93\x2d45ab\x2da4ea\x2dd3e88013f7f7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d66xcv.mount: Deactivated successfully. Oct 2 19:21:48.105077 systemd[1]: var-lib-kubelet-pods-b36af421\x2d0f93\x2d45ab\x2da4ea\x2dd3e88013f7f7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:21:48.105142 systemd[1]: var-lib-kubelet-pods-b36af421\x2d0f93\x2d45ab\x2da4ea\x2dd3e88013f7f7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:21:48.200879 kubelet[1411]: I1002 19:21:48.200820 1411 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-bpf-maps\") on node \"10.0.0.130\" DevicePath \"\"" Oct 2 19:21:48.200879 kubelet[1411]: I1002 19:21:48.200872 1411 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b36af421-0f93-45ab-a4ea-d3e88013f7f7-hubble-tls\") on node \"10.0.0.130\" DevicePath \"\"" Oct 2 19:21:48.200879 kubelet[1411]: I1002 19:21:48.200885 1411 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-cilium-run\") on node \"10.0.0.130\" DevicePath \"\"" Oct 2 19:21:48.200879 kubelet[1411]: I1002 19:21:48.200893 1411 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-cilium-cgroup\") on node \"10.0.0.130\" DevicePath \"\"" Oct 2 19:21:48.200879 kubelet[1411]: I1002 19:21:48.200904 1411 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-66xcv\" (UniqueName: \"kubernetes.io/projected/b36af421-0f93-45ab-a4ea-d3e88013f7f7-kube-api-access-66xcv\") on node \"10.0.0.130\" DevicePath \"\"" Oct 2 19:21:48.201178 kubelet[1411]: I1002 19:21:48.200913 1411 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b36af421-0f93-45ab-a4ea-d3e88013f7f7-cilium-config-path\") on node \"10.0.0.130\" DevicePath \"\"" Oct 2 19:21:48.201178 kubelet[1411]: I1002 19:21:48.200922 1411 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b36af421-0f93-45ab-a4ea-d3e88013f7f7-clustermesh-secrets\") on node \"10.0.0.130\" DevicePath \"\"" Oct 2 19:21:48.201178 kubelet[1411]: I1002 19:21:48.200930 1411 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b36af421-0f93-45ab-a4ea-d3e88013f7f7-cni-path\") on node \"10.0.0.130\" DevicePath \"\"" Oct 2 19:21:48.412115 kubelet[1411]: E1002 19:21:48.412076 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:48.421784 systemd[1]: Removed slice kubepods-burstable-podb36af421_0f93_45ab_a4ea_d3e88013f7f7.slice. 
Oct 2 19:21:48.710737 kubelet[1411]: I1002 19:21:48.710618 1411 scope.go:117] "RemoveContainer" containerID="f84d9d5350aa2addaf117dd67999ef80f3c7faa6ff007fc8c02189766a43c739" Oct 2 19:21:48.711808 env[1101]: time="2023-10-02T19:21:48.711774383Z" level=info msg="RemoveContainer for \"f84d9d5350aa2addaf117dd67999ef80f3c7faa6ff007fc8c02189766a43c739\"" Oct 2 19:21:48.714656 env[1101]: time="2023-10-02T19:21:48.714619605Z" level=info msg="RemoveContainer for \"f84d9d5350aa2addaf117dd67999ef80f3c7faa6ff007fc8c02189766a43c739\" returns successfully" Oct 2 19:21:49.412745 kubelet[1411]: E1002 19:21:49.412678 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:50.413635 kubelet[1411]: E1002 19:21:50.413586 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:50.419999 kubelet[1411]: I1002 19:21:50.419978 1411 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" path="/var/lib/kubelet/pods/b36af421-0f93-45ab-a4ea-d3e88013f7f7/volumes" Oct 2 19:21:50.499302 kubelet[1411]: I1002 19:21:50.499249 1411 topology_manager.go:215] "Topology Admit Handler" podUID="3341abbd-5c7e-482b-a180-c6399f0955c6" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-ddvvb" Oct 2 19:21:50.499302 kubelet[1411]: E1002 19:21:50.499309 1411 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" containerName="mount-cgroup" Oct 2 19:21:50.499302 kubelet[1411]: E1002 19:21:50.499321 1411 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" containerName="mount-cgroup" Oct 2 19:21:50.499532 kubelet[1411]: E1002 19:21:50.499330 1411 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" containerName="mount-cgroup" Oct 2 19:21:50.499532 kubelet[1411]: I1002 19:21:50.499350 1411 memory_manager.go:346] "RemoveStaleState removing state" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" containerName="mount-cgroup" Oct 2 19:21:50.499532 kubelet[1411]: I1002 19:21:50.499360 1411 memory_manager.go:346] "RemoveStaleState removing state" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" containerName="mount-cgroup" Oct 2 19:21:50.499532 kubelet[1411]: I1002 19:21:50.499368 1411 memory_manager.go:346] "RemoveStaleState removing state" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" containerName="mount-cgroup" Oct 2 19:21:50.501815 kubelet[1411]: W1002 19:21:50.501720 1411 reflector.go:535] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.0.0.130" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.130' and this object Oct 2 19:21:50.501815 kubelet[1411]: E1002 19:21:50.501754 1411 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.0.0.130" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.130' and this object Oct 2 19:21:50.504395 systemd[1]: Created slice kubepods-besteffort-pod3341abbd_5c7e_482b_a180_c6399f0955c6.slice. 
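The reflector warning above ("no relationship found between node '10.0.0.130' and this object") is the node authorizer at work: a kubelet may only read ConfigMaps and Secrets referenced by pods already bound to its node, so the cilium-config watch is rejected until the API server's graph catches up with the pods admitted here. One way to probe such a denial is a SelfSubjectAccessReview; the sketch below checks whatever identity the local kubeconfig carries (reproducing the node's exact denial would require the kubelet's own credentials):

package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Ask the API server whether the current identity may list the ConfigMap the
// kubelet was denied above. Namespace, verb, resource, and name are taken
// from the log line; the kubeconfig path is whatever the default resolves to.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	sar := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: "kube-system",
				Verb:      "list",
				Resource:  "configmaps",
				Name:      "cilium-config",
			},
		},
	}
	resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v denied=%v reason=%q\n", resp.Status.Allowed, resp.Status.Denied, resp.Status.Reason)
}
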
Oct 2 19:21:50.506396 kubelet[1411]: I1002 19:21:50.506362 1411 topology_manager.go:215] "Topology Admit Handler" podUID="11e5ca46-774e-4fc5-a1d6-8e3983af52a7" podNamespace="kube-system" podName="cilium-h6kbg" Oct 2 19:21:50.506463 kubelet[1411]: E1002 19:21:50.506416 1411 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" containerName="mount-cgroup" Oct 2 19:21:50.506463 kubelet[1411]: I1002 19:21:50.506441 1411 memory_manager.go:346] "RemoveStaleState removing state" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" containerName="mount-cgroup" Oct 2 19:21:50.506463 kubelet[1411]: I1002 19:21:50.506450 1411 memory_manager.go:346] "RemoveStaleState removing state" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" containerName="mount-cgroup" Oct 2 19:21:50.506531 kubelet[1411]: E1002 19:21:50.506467 1411 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b36af421-0f93-45ab-a4ea-d3e88013f7f7" containerName="mount-cgroup" Oct 2 19:21:50.511098 systemd[1]: Created slice kubepods-burstable-pod11e5ca46_774e_4fc5_a1d6_8e3983af52a7.slice. Oct 2 19:21:50.513713 kubelet[1411]: I1002 19:21:50.513659 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxxg8\" (UniqueName: \"kubernetes.io/projected/3341abbd-5c7e-482b-a180-c6399f0955c6-kube-api-access-dxxg8\") pod \"cilium-operator-6bc8ccdb58-ddvvb\" (UID: \"3341abbd-5c7e-482b-a180-c6399f0955c6\") " pod="kube-system/cilium-operator-6bc8ccdb58-ddvvb" Oct 2 19:21:50.513713 kubelet[1411]: I1002 19:21:50.513700 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3341abbd-5c7e-482b-a180-c6399f0955c6-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-ddvvb\" (UID: \"3341abbd-5c7e-482b-a180-c6399f0955c6\") " pod="kube-system/cilium-operator-6bc8ccdb58-ddvvb" Oct 2 19:21:50.614419 kubelet[1411]: I1002 19:21:50.614371 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-cilium-cgroup\") pod \"cilium-h6kbg\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " pod="kube-system/cilium-h6kbg" Oct 2 19:21:50.614419 kubelet[1411]: I1002 19:21:50.614430 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-lib-modules\") pod \"cilium-h6kbg\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " pod="kube-system/cilium-h6kbg" Oct 2 19:21:50.614636 kubelet[1411]: I1002 19:21:50.614464 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-host-proc-sys-kernel\") pod \"cilium-h6kbg\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " pod="kube-system/cilium-h6kbg" Oct 2 19:21:50.614636 kubelet[1411]: I1002 19:21:50.614547 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-cni-path\") pod \"cilium-h6kbg\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " pod="kube-system/cilium-h6kbg" Oct 2 19:21:50.614697 kubelet[1411]: I1002 19:21:50.614647 1411 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-host-proc-sys-net\") pod \"cilium-h6kbg\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " pod="kube-system/cilium-h6kbg" Oct 2 19:21:50.614752 kubelet[1411]: I1002 19:21:50.614724 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-hubble-tls\") pod \"cilium-h6kbg\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " pod="kube-system/cilium-h6kbg" Oct 2 19:21:50.614977 kubelet[1411]: I1002 19:21:50.614780 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-cilium-run\") pod \"cilium-h6kbg\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " pod="kube-system/cilium-h6kbg" Oct 2 19:21:50.614977 kubelet[1411]: I1002 19:21:50.614811 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-bpf-maps\") pod \"cilium-h6kbg\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " pod="kube-system/cilium-h6kbg" Oct 2 19:21:50.614977 kubelet[1411]: I1002 19:21:50.614858 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-hostproc\") pod \"cilium-h6kbg\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " pod="kube-system/cilium-h6kbg" Oct 2 19:21:50.614977 kubelet[1411]: I1002 19:21:50.614915 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-etc-cni-netd\") pod \"cilium-h6kbg\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " pod="kube-system/cilium-h6kbg" Oct 2 19:21:50.615134 kubelet[1411]: I1002 19:21:50.615013 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-clustermesh-secrets\") pod \"cilium-h6kbg\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " pod="kube-system/cilium-h6kbg" Oct 2 19:21:50.615134 kubelet[1411]: I1002 19:21:50.615074 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78mgd\" (UniqueName: \"kubernetes.io/projected/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-kube-api-access-78mgd\") pod \"cilium-h6kbg\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " pod="kube-system/cilium-h6kbg" Oct 2 19:21:50.615134 kubelet[1411]: I1002 19:21:50.615097 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-cilium-ipsec-secrets\") pod \"cilium-h6kbg\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " pod="kube-system/cilium-h6kbg" Oct 2 19:21:50.615244 kubelet[1411]: I1002 19:21:50.615186 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-xtables-lock\") pod 
\"cilium-h6kbg\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " pod="kube-system/cilium-h6kbg" Oct 2 19:21:50.615318 kubelet[1411]: I1002 19:21:50.615303 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-cilium-config-path\") pod \"cilium-h6kbg\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " pod="kube-system/cilium-h6kbg" Oct 2 19:21:50.660216 kubelet[1411]: E1002 19:21:50.660190 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:51.406698 kubelet[1411]: E1002 19:21:51.406651 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:51.407185 env[1101]: time="2023-10-02T19:21:51.407150671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-ddvvb,Uid:3341abbd-5c7e-482b-a180-c6399f0955c6,Namespace:kube-system,Attempt:0,}" Oct 2 19:21:51.414254 kubelet[1411]: E1002 19:21:51.414225 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:51.418453 env[1101]: time="2023-10-02T19:21:51.418385475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:21:51.418453 env[1101]: time="2023-10-02T19:21:51.418424058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:21:51.418453 env[1101]: time="2023-10-02T19:21:51.418436692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:21:51.418605 env[1101]: time="2023-10-02T19:21:51.418548603Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/75035aa9e6e12d566324ae23883a3426099a68dd1af685be80323b7bba55b746 pid=1969 runtime=io.containerd.runc.v2 Oct 2 19:21:51.420278 kubelet[1411]: E1002 19:21:51.420252 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:51.420717 env[1101]: time="2023-10-02T19:21:51.420683566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h6kbg,Uid:11e5ca46-774e-4fc5-a1d6-8e3983af52a7,Namespace:kube-system,Attempt:0,}" Oct 2 19:21:51.428052 systemd[1]: Started cri-containerd-75035aa9e6e12d566324ae23883a3426099a68dd1af685be80323b7bba55b746.scope. Oct 2 19:21:51.434006 env[1101]: time="2023-10-02T19:21:51.433943997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:21:51.434170 env[1101]: time="2023-10-02T19:21:51.433977841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:21:51.434170 env[1101]: time="2023-10-02T19:21:51.433987248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:21:51.434264 env[1101]: time="2023-10-02T19:21:51.434133555Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/28b00585ed33a48d80c30b80c41a7c14af5fb93f26a7eff58643abba3c3e4959 pid=2001 runtime=io.containerd.runc.v2 Oct 2 19:21:51.447923 kernel: audit: type=1400 audit(1696274511.435:645): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451066 kernel: audit: type=1400 audit(1696274511.435:646): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451089 kernel: audit: type=1400 audit(1696274511.435:647): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451104 kernel: audit: type=1400 audit(1696274511.435:648): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451161 kernel: audit: type=1400 audit(1696274511.435:649): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451175 kernel: audit: type=1400 audit(1696274511.435:650): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451191 kernel: audit: type=1400 audit(1696274511.435:651): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451205 kernel: audit: type=1400 audit(1696274511.435:652): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.435000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.435000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.435000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.435000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.435000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.435000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
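The run of avc: denied { bpf } and { perfmon } records here is systemd (pid=1) and runc being refused capabilities 39 and 38 while the new cri-containerd scopes are set up; the kernel "audit: type=1400" lines are the same events echoed through kauditd. A minimal lookup sketch for those capability numbers, assuming the values defined in linux/capability.h (CAP_PERFMON=38, CAP_BPF=39 on kernels 5.8 and later, which covers the 5.15 kernel on this host):

package main

import "fmt"

// Capability numbers taken from the AVC records in this log; the names are an
// assumption based on linux/capability.h, not something the audit records state.
var capNames = map[int]string{
    38: "CAP_PERFMON",
    39: "CAP_BPF",
}

func main() {
    for _, n := range []int{38, 39} {
        fmt.Printf("capability=%d -> %s\n", n, capNames[n])
    }
}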
Oct 2 19:21:51.435000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.435000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.444250 systemd[1]: Started cri-containerd-28b00585ed33a48d80c30b80c41a7c14af5fb93f26a7eff58643abba3c3e4959.scope. Oct 2 19:21:51.435000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.437000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.437000 audit: BPF prog-id=75 op=LOAD Oct 2 19:21:51.438000 audit[1979]: AVC avc: denied { bpf } for pid=1979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.438000 audit[1979]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=1969 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:51.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735303335616139653665313264353636333234616532333838336133 Oct 2 19:21:51.438000 audit[1979]: AVC avc: denied { perfmon } for pid=1979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.438000 audit[1979]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=1969 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:51.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735303335616139653665313264353636333234616532333838336133 Oct 2 19:21:51.438000 audit[1979]: AVC avc: denied { bpf } for pid=1979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.438000 audit[1979]: AVC avc: denied { bpf } for pid=1979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.438000 audit[1979]: AVC avc: denied { bpf } for pid=1979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.438000 audit[1979]: AVC avc: denied { perfmon } for pid=1979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:21:51.438000 audit[1979]: AVC avc: denied { perfmon } for pid=1979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.438000 audit[1979]: AVC avc: denied { perfmon } for pid=1979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.438000 audit[1979]: AVC avc: denied { perfmon } for pid=1979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.438000 audit[1979]: AVC avc: denied { perfmon } for pid=1979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.438000 audit[1979]: AVC avc: denied { bpf } for pid=1979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.438000 audit[1979]: AVC avc: denied { bpf } for pid=1979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.438000 audit: BPF prog-id=76 op=LOAD Oct 2 19:21:51.438000 audit[1979]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c000298f90 items=0 ppid=1969 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:51.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735303335616139653665313264353636333234616532333838336133 Oct 2 19:21:51.443000 audit[1979]: AVC avc: denied { bpf } for pid=1979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.443000 audit[1979]: AVC avc: denied { bpf } for pid=1979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.443000 audit[1979]: AVC avc: denied { perfmon } for pid=1979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.443000 audit[1979]: AVC avc: denied { perfmon } for pid=1979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.443000 audit[1979]: AVC avc: denied { perfmon } for pid=1979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.443000 audit[1979]: AVC avc: denied { perfmon } for pid=1979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.443000 audit[1979]: AVC avc: denied { perfmon } for pid=1979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.443000 audit[1979]: AVC avc: denied { bpf } for pid=1979 comm="runc" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.443000 audit[1979]: AVC avc: denied { bpf } for pid=1979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.443000 audit: BPF prog-id=77 op=LOAD Oct 2 19:21:51.443000 audit[1979]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c000298fd8 items=0 ppid=1969 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:51.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735303335616139653665313264353636333234616532333838336133 Oct 2 19:21:51.446000 audit: BPF prog-id=77 op=UNLOAD Oct 2 19:21:51.447000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:21:51.447000 audit[1979]: AVC avc: denied { bpf } for pid=1979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.447000 audit[1979]: AVC avc: denied { bpf } for pid=1979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.447000 audit[1979]: AVC avc: denied { bpf } for pid=1979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.447000 audit[1979]: AVC avc: denied { perfmon } for pid=1979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.447000 audit[1979]: AVC avc: denied { perfmon } for pid=1979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.447000 audit[1979]: AVC avc: denied { perfmon } for pid=1979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.447000 audit[1979]: AVC avc: denied { perfmon } for pid=1979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.447000 audit[1979]: AVC avc: denied { perfmon } for pid=1979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.447000 audit[1979]: AVC avc: denied { bpf } for pid=1979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.447000 audit[1979]: AVC avc: denied { bpf } for pid=1979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.447000 audit: BPF prog-id=78 op=LOAD Oct 2 19:21:51.447000 audit[1979]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c0002993e8 items=0 ppid=1969 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:51.447000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735303335616139653665313264353636333234616532333838336133 Oct 2 19:21:51.450000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.450000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.450000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.450000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit: BPF prog-id=79 op=LOAD Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2001 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:51.451000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238623030353835656433336134386438306333306238306334316137 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2001 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:51.451000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238623030353835656433336134386438306333306238306334316137 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit: BPF prog-id=80 op=LOAD Oct 2 19:21:51.451000 audit[2009]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0002c29c0 items=0 ppid=2001 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:51.451000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238623030353835656433336134386438306333306238306334316137 Oct 2 19:21:51.451000 
audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit: BPF prog-id=81 op=LOAD Oct 2 19:21:51.451000 audit[2009]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c0002c2a08 items=0 ppid=2001 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:51.451000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238623030353835656433336134386438306333306238306334316137 Oct 2 19:21:51.451000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:21:51.451000 audit: BPF prog-id=80 op=UNLOAD Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 
audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:51.451000 audit: BPF prog-id=82 op=LOAD Oct 2 19:21:51.451000 audit[2009]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c0002c2e18 items=0 ppid=2001 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:51.451000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238623030353835656433336134386438306333306238306334316137 Oct 2 19:21:51.460927 env[1101]: time="2023-10-02T19:21:51.460872789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h6kbg,Uid:11e5ca46-774e-4fc5-a1d6-8e3983af52a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"28b00585ed33a48d80c30b80c41a7c14af5fb93f26a7eff58643abba3c3e4959\"" Oct 2 19:21:51.461600 kubelet[1411]: E1002 19:21:51.461566 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:51.463336 env[1101]: time="2023-10-02T19:21:51.463296738Z" level=info msg="CreateContainer within sandbox \"28b00585ed33a48d80c30b80c41a7c14af5fb93f26a7eff58643abba3c3e4959\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:21:51.476822 env[1101]: time="2023-10-02T19:21:51.476777674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-ddvvb,Uid:3341abbd-5c7e-482b-a180-c6399f0955c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"75035aa9e6e12d566324ae23883a3426099a68dd1af685be80323b7bba55b746\"" Oct 2 19:21:51.477293 kubelet[1411]: E1002 19:21:51.477272 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:51.477912 env[1101]: time="2023-10-02T19:21:51.477871495Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 2 19:21:51.478206 env[1101]: 
time="2023-10-02T19:21:51.478183093Z" level=info msg="CreateContainer within sandbox \"28b00585ed33a48d80c30b80c41a7c14af5fb93f26a7eff58643abba3c3e4959\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294\"" Oct 2 19:21:51.478453 env[1101]: time="2023-10-02T19:21:51.478403478Z" level=info msg="StartContainer for \"21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294\"" Oct 2 19:21:51.490793 systemd[1]: Started cri-containerd-21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294.scope. Oct 2 19:21:51.498717 systemd[1]: cri-containerd-21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294.scope: Deactivated successfully. Oct 2 19:21:51.498948 systemd[1]: Stopped cri-containerd-21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294.scope. Oct 2 19:21:51.516244 env[1101]: time="2023-10-02T19:21:51.516174894Z" level=info msg="shim disconnected" id=21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294 Oct 2 19:21:51.516244 env[1101]: time="2023-10-02T19:21:51.516233926Z" level=warning msg="cleaning up after shim disconnected" id=21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294 namespace=k8s.io Oct 2 19:21:51.516244 env[1101]: time="2023-10-02T19:21:51.516245367Z" level=info msg="cleaning up dead shim" Oct 2 19:21:51.522770 env[1101]: time="2023-10-02T19:21:51.522731190Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:21:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2065 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:21:51Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:21:51.523125 env[1101]: time="2023-10-02T19:21:51.523058116Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed" Oct 2 19:21:51.527059 env[1101]: time="2023-10-02T19:21:51.527001929Z" level=error msg="Failed to pipe stdout of container \"21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294\"" error="reading from a closed fifo" Oct 2 19:21:51.527126 env[1101]: time="2023-10-02T19:21:51.527027628Z" level=error msg="Failed to pipe stderr of container \"21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294\"" error="reading from a closed fifo" Oct 2 19:21:51.529382 env[1101]: time="2023-10-02T19:21:51.529340537Z" level=error msg="StartContainer for \"21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:21:51.529596 kubelet[1411]: E1002 19:21:51.529564 1411 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294" Oct 2 19:21:51.529697 kubelet[1411]: E1002 19:21:51.529669 1411 kuberuntime_manager.go:1209] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:21:51.529697 kubelet[1411]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:21:51.529697 kubelet[1411]: rm /hostbin/cilium-mount Oct 2 19:21:51.529697 kubelet[1411]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-78mgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-h6kbg_kube-system(11e5ca46-774e-4fc5-a1d6-8e3983af52a7): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:21:51.529901 kubelet[1411]: E1002 19:21:51.529710 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-h6kbg" podUID="11e5ca46-774e-4fc5-a1d6-8e3983af52a7" Oct 2 19:21:51.720172 kubelet[1411]: E1002 19:21:51.718925 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:51.726269 env[1101]: time="2023-10-02T19:21:51.726215469Z" level=info msg="CreateContainer within sandbox \"28b00585ed33a48d80c30b80c41a7c14af5fb93f26a7eff58643abba3c3e4959\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:21:51.741554 env[1101]: time="2023-10-02T19:21:51.741501648Z" level=info msg="CreateContainer within sandbox \"28b00585ed33a48d80c30b80c41a7c14af5fb93f26a7eff58643abba3c3e4959\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd\"" Oct 2 19:21:51.742045 env[1101]: time="2023-10-02T19:21:51.742025506Z" 
level=info msg="StartContainer for \"e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd\"" Oct 2 19:21:51.756761 systemd[1]: Started cri-containerd-e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd.scope. Oct 2 19:21:51.764633 systemd[1]: cri-containerd-e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd.scope: Deactivated successfully. Oct 2 19:21:51.764924 systemd[1]: Stopped cri-containerd-e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd.scope. Oct 2 19:21:51.772773 env[1101]: time="2023-10-02T19:21:51.772708282Z" level=info msg="shim disconnected" id=e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd Oct 2 19:21:51.772967 env[1101]: time="2023-10-02T19:21:51.772772362Z" level=warning msg="cleaning up after shim disconnected" id=e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd namespace=k8s.io Oct 2 19:21:51.772967 env[1101]: time="2023-10-02T19:21:51.772786519Z" level=info msg="cleaning up dead shim" Oct 2 19:21:51.778247 env[1101]: time="2023-10-02T19:21:51.778216222Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:21:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2100 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:21:51Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:21:51.778518 env[1101]: time="2023-10-02T19:21:51.778473447Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed" Oct 2 19:21:51.783055 env[1101]: time="2023-10-02T19:21:51.782995359Z" level=error msg="Failed to pipe stdout of container \"e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd\"" error="reading from a closed fifo" Oct 2 19:21:51.783149 env[1101]: time="2023-10-02T19:21:51.783029644Z" level=error msg="Failed to pipe stderr of container \"e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd\"" error="reading from a closed fifo" Oct 2 19:21:51.785225 env[1101]: time="2023-10-02T19:21:51.785174507Z" level=error msg="StartContainer for \"e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:21:51.785471 kubelet[1411]: E1002 19:21:51.785438 1411 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd" Oct 2 19:21:51.785614 kubelet[1411]: E1002 19:21:51.785571 1411 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:21:51.785614 kubelet[1411]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:21:51.785614 kubelet[1411]: rm /hostbin/cilium-mount 
Oct 2 19:21:51.785614 kubelet[1411]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-78mgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-h6kbg_kube-system(11e5ca46-774e-4fc5-a1d6-8e3983af52a7): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:21:51.785614 kubelet[1411]: E1002 19:21:51.785620 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-h6kbg" podUID="11e5ca46-774e-4fc5-a1d6-8e3983af52a7" Oct 2 19:21:52.415114 kubelet[1411]: E1002 19:21:52.415022 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:52.625353 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd-rootfs.mount: Deactivated successfully. 
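Both attempts at the mount-cgroup init container die at the same point: the OCI runtime reports "write /proc/self/attr/keycreate: invalid argument" while applying the spc_t label from the SecurityContext dumped above. A minimal sketch of that labelling step, assuming the conventional system_u:system_r prefix (only Type:spc_t and Level:s0 appear in the logged spec); this illustrates the failing write, it is not runc's actual code:

package main

import (
    "fmt"
    "os"
)

func main() {
    // Only spc_t and s0 come from the logged SELinuxOptions; the user and role
    // components of the label are assumed.
    label := "system_u:system_r:spc_t:s0"

    // Before exec'ing the container process, the runtime labels the session
    // keyring by writing the context to /proc/self/attr/keycreate. On this host
    // the write fails with EINVAL, which containerd wraps into the
    // RunContainerError seen above.
    if err := os.WriteFile("/proc/self/attr/keycreate", []byte(label), 0o644); err != nil {
        fmt.Println("keycreate write failed:", err)
        return
    }
    fmt.Println("session keyring labelled", label)
}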
Oct 2 19:21:52.721967 kubelet[1411]: I1002 19:21:52.721854 1411 scope.go:117] "RemoveContainer" containerID="21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294" Oct 2 19:21:52.722133 kubelet[1411]: I1002 19:21:52.722119 1411 scope.go:117] "RemoveContainer" containerID="21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294" Oct 2 19:21:52.723216 env[1101]: time="2023-10-02T19:21:52.723178462Z" level=info msg="RemoveContainer for \"21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294\"" Oct 2 19:21:52.723623 env[1101]: time="2023-10-02T19:21:52.723595678Z" level=info msg="RemoveContainer for \"21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294\"" Oct 2 19:21:52.723710 env[1101]: time="2023-10-02T19:21:52.723679176Z" level=error msg="RemoveContainer for \"21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294\" failed" error="failed to set removing state for container \"21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294\": container is already in removing state" Oct 2 19:21:52.723820 kubelet[1411]: E1002 19:21:52.723807 1411 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294\": container is already in removing state" containerID="21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294" Oct 2 19:21:52.723882 kubelet[1411]: E1002 19:21:52.723831 1411 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294": container is already in removing state; Skipping pod "cilium-h6kbg_kube-system(11e5ca46-774e-4fc5-a1d6-8e3983af52a7)" Oct 2 19:21:52.723882 kubelet[1411]: E1002 19:21:52.723877 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:52.724083 kubelet[1411]: E1002 19:21:52.724072 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-h6kbg_kube-system(11e5ca46-774e-4fc5-a1d6-8e3983af52a7)\"" pod="kube-system/cilium-h6kbg" podUID="11e5ca46-774e-4fc5-a1d6-8e3983af52a7" Oct 2 19:21:52.763965 env[1101]: time="2023-10-02T19:21:52.763905335Z" level=info msg="RemoveContainer for \"21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294\" returns successfully" Oct 2 19:21:52.769239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3894826883.mount: Deactivated successfully. 
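The two RemoveContainer calls for the failed mount-cgroup container race: the second one finds the container already in removing state, the kubelet logs the error and skips the pod, and the pod then enters CrashLoopBackOff starting at 10s. A small sketch of how that restart delay grows, assuming the default kubelet back-off behaviour (doubling from 10s, capped at 5m):

package main

import (
    "fmt"
    "time"
)

func main() {
    // Assumed kubelet defaults; the "back-off 10s restarting failed container"
    // message above corresponds to the first retry.
    delay, maxDelay := 10*time.Second, 5*time.Minute
    for attempt := 1; attempt <= 7; attempt++ {
        fmt.Printf("failed restart %d -> next back-off %s\n", attempt, delay)
        delay *= 2
        if delay > maxDelay {
            delay = maxDelay
        }
    }
}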
Oct 2 19:21:53.415248 kubelet[1411]: E1002 19:21:53.415202 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:53.448519 env[1101]: time="2023-10-02T19:21:53.448466690Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:21:53.450060 env[1101]: time="2023-10-02T19:21:53.450039385Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:21:53.451262 env[1101]: time="2023-10-02T19:21:53.451228937Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:21:53.451703 env[1101]: time="2023-10-02T19:21:53.451668525Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 2 19:21:53.453258 env[1101]: time="2023-10-02T19:21:53.453238042Z" level=info msg="CreateContainer within sandbox \"75035aa9e6e12d566324ae23883a3426099a68dd1af685be80323b7bba55b746\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:21:53.463540 env[1101]: time="2023-10-02T19:21:53.463508318Z" level=info msg="CreateContainer within sandbox \"75035aa9e6e12d566324ae23883a3426099a68dd1af685be80323b7bba55b746\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fb1d43f9a04cc8aba72d3f3a565a6f145435b7ad75432319844b5e746ac89868\"" Oct 2 19:21:53.463961 env[1101]: time="2023-10-02T19:21:53.463923048Z" level=info msg="StartContainer for \"fb1d43f9a04cc8aba72d3f3a565a6f145435b7ad75432319844b5e746ac89868\"" Oct 2 19:21:53.476486 systemd[1]: Started cri-containerd-fb1d43f9a04cc8aba72d3f3a565a6f145435b7ad75432319844b5e746ac89868.scope. 
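The audit records around these container starts, both the block above and the one that follows, carry the runc command line as a hex-encoded PROCTITLE field with NUL-separated arguments. A short decoder, fed with a truncated prefix of one of the proctitle values from this log (the full field decodes to the complete runc invocation for the task):

package main

import (
    "encoding/hex"
    "fmt"
    "strings"
)

func main() {
    // Leading portion of a PROCTITLE value copied from the audit records in
    // this log; the real field continues with --log and the task directory.
    const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"

    raw, err := hex.DecodeString(proctitle)
    if err != nil {
        panic(err)
    }
    // Audit stores argv with NUL separators; render them as spaces.
    fmt.Println(strings.ReplaceAll(string(raw), "\x00", " "))
    // Prints: runc --root /run/containerd/runc/k8s.io
}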
Oct 2 19:21:53.484000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.486013 kernel: kauditd_printk_skb: 106 callbacks suppressed Oct 2 19:21:53.486083 kernel: audit: type=1400 audit(1696274513.484:681): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.484000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.490082 kernel: audit: type=1400 audit(1696274513.484:682): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.490134 kernel: audit: type=1400 audit(1696274513.484:683): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.484000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.496607 kernel: audit: type=1400 audit(1696274513.484:684): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.496655 kernel: audit: type=1400 audit(1696274513.484:685): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.484000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.484000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.484000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.498631 kernel: audit: type=1400 audit(1696274513.484:686): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.498691 kernel: audit: type=1400 audit(1696274513.484:687): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.484000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.484000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.502416 kernel: audit: type=1400 audit(1696274513.484:688): avc: denied { perfmon } for 
pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.502456 kernel: audit: type=1400 audit(1696274513.484:689): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.484000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.487000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.506456 kernel: audit: type=1400 audit(1696274513.487:690): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.487000 audit: BPF prog-id=83 op=LOAD Oct 2 19:21:53.489000 audit[2119]: AVC avc: denied { bpf } for pid=2119 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.489000 audit[2119]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=1969 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:53.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662316434336639613034636338616261373264336633613536356136 Oct 2 19:21:53.489000 audit[2119]: AVC avc: denied { perfmon } for pid=2119 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.489000 audit[2119]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=8 items=0 ppid=1969 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:53.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662316434336639613034636338616261373264336633613536356136 Oct 2 19:21:53.489000 audit[2119]: AVC avc: denied { bpf } for pid=2119 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.489000 audit[2119]: AVC avc: denied { bpf } for pid=2119 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.489000 audit[2119]: AVC avc: denied { bpf } for pid=2119 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.489000 audit[2119]: AVC avc: denied { perfmon } for pid=2119 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.489000 audit[2119]: AVC avc: denied { perfmon } for pid=2119 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.489000 audit[2119]: AVC avc: denied { perfmon } for pid=2119 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.489000 audit[2119]: AVC avc: denied { perfmon } for pid=2119 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.489000 audit[2119]: AVC avc: denied { perfmon } for pid=2119 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.489000 audit[2119]: AVC avc: denied { bpf } for pid=2119 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.489000 audit[2119]: AVC avc: denied { bpf } for pid=2119 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.489000 audit: BPF prog-id=84 op=LOAD Oct 2 19:21:53.489000 audit[2119]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c00020e730 items=0 ppid=1969 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:53.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662316434336639613034636338616261373264336633613536356136 Oct 2 19:21:53.491000 audit[2119]: AVC avc: denied { bpf } for pid=2119 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.491000 audit[2119]: AVC avc: denied { bpf } for pid=2119 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.491000 audit[2119]: AVC avc: denied { perfmon } for pid=2119 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.491000 audit[2119]: AVC avc: denied { perfmon } for pid=2119 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.491000 audit[2119]: AVC avc: denied { perfmon } for pid=2119 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.491000 audit[2119]: AVC avc: denied { perfmon } for pid=2119 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.491000 audit[2119]: AVC avc: denied { perfmon } for pid=2119 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:21:53.491000 audit[2119]: AVC avc: denied { bpf } for pid=2119 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.491000 audit[2119]: AVC avc: denied { bpf } for pid=2119 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.491000 audit: BPF prog-id=85 op=LOAD Oct 2 19:21:53.491000 audit[2119]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c00020e778 items=0 ppid=1969 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:53.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662316434336639613034636338616261373264336633613536356136 Oct 2 19:21:53.493000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:21:53.493000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:21:53.493000 audit[2119]: AVC avc: denied { bpf } for pid=2119 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.493000 audit[2119]: AVC avc: denied { bpf } for pid=2119 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.493000 audit[2119]: AVC avc: denied { bpf } for pid=2119 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.493000 audit[2119]: AVC avc: denied { perfmon } for pid=2119 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.493000 audit[2119]: AVC avc: denied { perfmon } for pid=2119 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.493000 audit[2119]: AVC avc: denied { perfmon } for pid=2119 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.493000 audit[2119]: AVC avc: denied { perfmon } for pid=2119 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.493000 audit[2119]: AVC avc: denied { perfmon } for pid=2119 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.493000 audit[2119]: AVC avc: denied { bpf } for pid=2119 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.493000 audit[2119]: AVC avc: denied { bpf } for pid=2119 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:53.493000 audit: BPF prog-id=86 op=LOAD Oct 2 19:21:53.493000 audit[2119]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c00020eb88 items=0 ppid=1969 pid=2119 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:53.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662316434336639613034636338616261373264336633613536356136 Oct 2 19:21:53.514054 env[1101]: time="2023-10-02T19:21:53.514015769Z" level=info msg="StartContainer for \"fb1d43f9a04cc8aba72d3f3a565a6f145435b7ad75432319844b5e746ac89868\" returns successfully" Oct 2 19:21:53.533000 audit[2130]: AVC avc: denied { map_create } for pid=2130 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c594,c846 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c594,c846 tclass=bpf permissive=0 Oct 2 19:21:53.533000 audit[2130]: SYSCALL arch=c000003e syscall=321 success=no exit=-13 a0=0 a1=c00070f7d0 a2=48 a3=c00070f7c0 items=0 ppid=1969 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c594,c846 key=(null) Oct 2 19:21:53.533000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:21:53.726151 kubelet[1411]: E1002 19:21:53.726068 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:54.415455 kubelet[1411]: E1002 19:21:54.415391 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:54.622526 kubelet[1411]: W1002 19:21:54.622484 1411 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11e5ca46_774e_4fc5_a1d6_8e3983af52a7.slice/cri-containerd-21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294.scope WatchSource:0}: container "21c74215b25d4fbbdef9e848fbac35da73216dbe3eae4e1d6ea23e92c8320294" in namespace "k8s.io": not found Oct 2 19:21:54.727463 kubelet[1411]: E1002 19:21:54.727358 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:55.415618 kubelet[1411]: E1002 19:21:55.415546 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:55.661573 kubelet[1411]: E1002 19:21:55.661521 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:56.415968 kubelet[1411]: E1002 19:21:56.415886 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:57.416714 kubelet[1411]: E1002 19:21:57.416651 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:57.728176 kubelet[1411]: W1002 19:21:57.728062 1411 manager.go:1159] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11e5ca46_774e_4fc5_a1d6_8e3983af52a7.slice/cri-containerd-e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd.scope WatchSource:0}: task e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd not found: not found Oct 2 19:21:58.417867 kubelet[1411]: E1002 19:21:58.417801 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:59.418404 kubelet[1411]: E1002 19:21:59.418350 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:00.284468 kubelet[1411]: E1002 19:22:00.284421 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:00.419181 kubelet[1411]: E1002 19:22:00.419144 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:00.662652 kubelet[1411]: E1002 19:22:00.662624 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:01.419664 kubelet[1411]: E1002 19:22:01.419626 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:02.419713 kubelet[1411]: E1002 19:22:02.419684 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:03.420455 kubelet[1411]: E1002 19:22:03.420387 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:04.418549 kubelet[1411]: E1002 19:22:04.418516 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:22:04.420386 env[1101]: time="2023-10-02T19:22:04.420342192Z" level=info msg="CreateContainer within sandbox \"28b00585ed33a48d80c30b80c41a7c14af5fb93f26a7eff58643abba3c3e4959\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:22:04.421305 kubelet[1411]: E1002 19:22:04.421291 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:04.431476 kubelet[1411]: I1002 19:22:04.431443 1411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-ddvvb" podStartSLOduration=12.457150071 podCreationTimestamp="2023-10-02 19:21:50 +0000 UTC" firstStartedPulling="2023-10-02 19:21:51.47765117 +0000 UTC m=+191.563729933" lastFinishedPulling="2023-10-02 19:21:53.451893359 +0000 UTC m=+193.537972132" observedRunningTime="2023-10-02 19:21:53.732776258 +0000 UTC m=+193.818855031" watchObservedRunningTime="2023-10-02 19:22:04.43139227 +0000 UTC m=+204.517471043" Oct 2 19:22:04.434361 env[1101]: time="2023-10-02T19:22:04.434300088Z" level=info msg="CreateContainer within sandbox \"28b00585ed33a48d80c30b80c41a7c14af5fb93f26a7eff58643abba3c3e4959\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead\"" Oct 2 19:22:04.434868 env[1101]: time="2023-10-02T19:22:04.434826780Z" level=info msg="StartContainer for 
\"731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead\"" Oct 2 19:22:04.450135 systemd[1]: Started cri-containerd-731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead.scope. Oct 2 19:22:04.458145 systemd[1]: cri-containerd-731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead.scope: Deactivated successfully. Oct 2 19:22:04.458393 systemd[1]: Stopped cri-containerd-731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead.scope. Oct 2 19:22:04.676067 env[1101]: time="2023-10-02T19:22:04.675919255Z" level=info msg="shim disconnected" id=731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead Oct 2 19:22:04.676067 env[1101]: time="2023-10-02T19:22:04.675995249Z" level=warning msg="cleaning up after shim disconnected" id=731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead namespace=k8s.io Oct 2 19:22:04.676067 env[1101]: time="2023-10-02T19:22:04.676004095Z" level=info msg="cleaning up dead shim" Oct 2 19:22:04.682214 env[1101]: time="2023-10-02T19:22:04.682159562Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:22:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2179 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:22:04Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:22:04.682479 env[1101]: time="2023-10-02T19:22:04.682417357Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:22:04.682708 env[1101]: time="2023-10-02T19:22:04.682645838Z" level=error msg="Failed to pipe stderr of container \"731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead\"" error="reading from a closed fifo" Oct 2 19:22:04.683189 env[1101]: time="2023-10-02T19:22:04.683147823Z" level=error msg="Failed to pipe stdout of container \"731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead\"" error="reading from a closed fifo" Oct 2 19:22:04.685431 env[1101]: time="2023-10-02T19:22:04.685379487Z" level=error msg="StartContainer for \"731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:22:04.685647 kubelet[1411]: E1002 19:22:04.685615 1411 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead" Oct 2 19:22:04.686049 kubelet[1411]: E1002 19:22:04.685731 1411 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:22:04.686049 kubelet[1411]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:22:04.686049 kubelet[1411]: rm /hostbin/cilium-mount Oct 2 19:22:04.686049 
kubelet[1411]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-78mgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-h6kbg_kube-system(11e5ca46-774e-4fc5-a1d6-8e3983af52a7): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:22:04.686049 kubelet[1411]: E1002 19:22:04.685775 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-h6kbg" podUID="11e5ca46-774e-4fc5-a1d6-8e3983af52a7" Oct 2 19:22:04.742039 kubelet[1411]: I1002 19:22:04.742013 1411 scope.go:117] "RemoveContainer" containerID="e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd" Oct 2 19:22:04.742319 kubelet[1411]: I1002 19:22:04.742303 1411 scope.go:117] "RemoveContainer" containerID="e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd" Oct 2 19:22:04.743209 env[1101]: time="2023-10-02T19:22:04.743176471Z" level=info msg="RemoveContainer for \"e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd\"" Oct 2 19:22:04.743528 env[1101]: time="2023-10-02T19:22:04.743491585Z" level=info msg="RemoveContainer for \"e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd\"" Oct 2 19:22:04.743618 env[1101]: time="2023-10-02T19:22:04.743591173Z" level=error msg="RemoveContainer for \"e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd\" failed" error="failed to set removing state for container \"e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd\": container is already in removing state" Oct 2 19:22:04.743728 kubelet[1411]: E1002 19:22:04.743711 1411 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container 
\"e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd\": container is already in removing state" containerID="e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd" Oct 2 19:22:04.743786 kubelet[1411]: E1002 19:22:04.743740 1411 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd": container is already in removing state; Skipping pod "cilium-h6kbg_kube-system(11e5ca46-774e-4fc5-a1d6-8e3983af52a7)" Oct 2 19:22:04.743832 kubelet[1411]: E1002 19:22:04.743793 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:22:04.744015 kubelet[1411]: E1002 19:22:04.744005 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-h6kbg_kube-system(11e5ca46-774e-4fc5-a1d6-8e3983af52a7)\"" pod="kube-system/cilium-h6kbg" podUID="11e5ca46-774e-4fc5-a1d6-8e3983af52a7" Oct 2 19:22:04.747386 env[1101]: time="2023-10-02T19:22:04.747361726Z" level=info msg="RemoveContainer for \"e0801f3b73c5dd9569e9cd385ad9754d74e038c55f248cacf4ed3c52daf4acfd\" returns successfully" Oct 2 19:22:05.421553 kubelet[1411]: E1002 19:22:05.421498 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:05.430204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead-rootfs.mount: Deactivated successfully. 
Oct 2 19:22:05.664063 kubelet[1411]: E1002 19:22:05.664031 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:06.422016 kubelet[1411]: E1002 19:22:06.421976 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:07.422201 kubelet[1411]: E1002 19:22:07.422146 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:07.780495 kubelet[1411]: W1002 19:22:07.780372 1411 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11e5ca46_774e_4fc5_a1d6_8e3983af52a7.slice/cri-containerd-731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead.scope WatchSource:0}: task 731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead not found: not found Oct 2 19:22:08.422546 kubelet[1411]: E1002 19:22:08.422508 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:09.422789 kubelet[1411]: E1002 19:22:09.422744 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:10.423274 kubelet[1411]: E1002 19:22:10.423235 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:10.664739 kubelet[1411]: E1002 19:22:10.664710 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:11.424034 kubelet[1411]: E1002 19:22:11.423967 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:12.424970 kubelet[1411]: E1002 19:22:12.424926 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:13.425698 kubelet[1411]: E1002 19:22:13.425648 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:14.426691 kubelet[1411]: E1002 19:22:14.426652 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:15.418414 kubelet[1411]: E1002 19:22:15.418360 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:22:15.418598 kubelet[1411]: E1002 19:22:15.418588 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-h6kbg_kube-system(11e5ca46-774e-4fc5-a1d6-8e3983af52a7)\"" pod="kube-system/cilium-h6kbg" podUID="11e5ca46-774e-4fc5-a1d6-8e3983af52a7" Oct 2 19:22:15.427620 kubelet[1411]: E1002 19:22:15.427597 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:15.666147 kubelet[1411]: E1002 19:22:15.666120 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" Oct 2 19:22:16.427730 kubelet[1411]: E1002 19:22:16.427695 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:17.428821 kubelet[1411]: E1002 19:22:17.428738 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:18.429673 kubelet[1411]: E1002 19:22:18.429637 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:19.430622 kubelet[1411]: E1002 19:22:19.430535 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:20.285191 kubelet[1411]: E1002 19:22:20.285125 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:20.431336 kubelet[1411]: E1002 19:22:20.431297 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:20.667703 kubelet[1411]: E1002 19:22:20.667672 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:21.432080 kubelet[1411]: E1002 19:22:21.432015 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:22.432873 kubelet[1411]: E1002 19:22:22.432810 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:23.433493 kubelet[1411]: E1002 19:22:23.433445 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:24.434143 kubelet[1411]: E1002 19:22:24.434097 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:25.434451 kubelet[1411]: E1002 19:22:25.434370 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:25.669065 kubelet[1411]: E1002 19:22:25.669026 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:26.434741 kubelet[1411]: E1002 19:22:26.434678 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:27.435150 kubelet[1411]: E1002 19:22:27.435083 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:28.435536 kubelet[1411]: E1002 19:22:28.435494 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:29.417788 kubelet[1411]: E1002 19:22:29.417738 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:22:29.419519 env[1101]: time="2023-10-02T19:22:29.419467862Z" level=info msg="CreateContainer within sandbox \"28b00585ed33a48d80c30b80c41a7c14af5fb93f26a7eff58643abba3c3e4959\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 
19:22:29.430066 env[1101]: time="2023-10-02T19:22:29.430027565Z" level=info msg="CreateContainer within sandbox \"28b00585ed33a48d80c30b80c41a7c14af5fb93f26a7eff58643abba3c3e4959\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"6495c5fbca49f89984f05dee4f6733a8bdbf487f27e991988387b6b834f180f5\"" Oct 2 19:22:29.430470 env[1101]: time="2023-10-02T19:22:29.430419549Z" level=info msg="StartContainer for \"6495c5fbca49f89984f05dee4f6733a8bdbf487f27e991988387b6b834f180f5\"" Oct 2 19:22:29.436783 kubelet[1411]: E1002 19:22:29.436740 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:29.443759 systemd[1]: Started cri-containerd-6495c5fbca49f89984f05dee4f6733a8bdbf487f27e991988387b6b834f180f5.scope. Oct 2 19:22:29.449801 systemd[1]: cri-containerd-6495c5fbca49f89984f05dee4f6733a8bdbf487f27e991988387b6b834f180f5.scope: Deactivated successfully. Oct 2 19:22:29.450104 systemd[1]: Stopped cri-containerd-6495c5fbca49f89984f05dee4f6733a8bdbf487f27e991988387b6b834f180f5.scope. Oct 2 19:22:29.458763 env[1101]: time="2023-10-02T19:22:29.458716564Z" level=info msg="shim disconnected" id=6495c5fbca49f89984f05dee4f6733a8bdbf487f27e991988387b6b834f180f5 Oct 2 19:22:29.458894 env[1101]: time="2023-10-02T19:22:29.458766762Z" level=warning msg="cleaning up after shim disconnected" id=6495c5fbca49f89984f05dee4f6733a8bdbf487f27e991988387b6b834f180f5 namespace=k8s.io Oct 2 19:22:29.458894 env[1101]: time="2023-10-02T19:22:29.458777492Z" level=info msg="cleaning up dead shim" Oct 2 19:22:29.465034 env[1101]: time="2023-10-02T19:22:29.464973080Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:22:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2217 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:22:29Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6495c5fbca49f89984f05dee4f6733a8bdbf487f27e991988387b6b834f180f5/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:22:29.465278 env[1101]: time="2023-10-02T19:22:29.465223762Z" level=error msg="copy shim log" error="read /proc/self/fd/47: file already closed" Oct 2 19:22:29.465425 env[1101]: time="2023-10-02T19:22:29.465391726Z" level=error msg="Failed to pipe stdout of container \"6495c5fbca49f89984f05dee4f6733a8bdbf487f27e991988387b6b834f180f5\"" error="reading from a closed fifo" Oct 2 19:22:29.466748 env[1101]: time="2023-10-02T19:22:29.466652822Z" level=error msg="Failed to pipe stderr of container \"6495c5fbca49f89984f05dee4f6733a8bdbf487f27e991988387b6b834f180f5\"" error="reading from a closed fifo" Oct 2 19:22:29.468352 env[1101]: time="2023-10-02T19:22:29.468317043Z" level=error msg="StartContainer for \"6495c5fbca49f89984f05dee4f6733a8bdbf487f27e991988387b6b834f180f5\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:22:29.468568 kubelet[1411]: E1002 19:22:29.468546 1411 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" 
containerID="6495c5fbca49f89984f05dee4f6733a8bdbf487f27e991988387b6b834f180f5" Oct 2 19:22:29.468668 kubelet[1411]: E1002 19:22:29.468647 1411 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:22:29.468668 kubelet[1411]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:22:29.468668 kubelet[1411]: rm /hostbin/cilium-mount Oct 2 19:22:29.468668 kubelet[1411]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-78mgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-h6kbg_kube-system(11e5ca46-774e-4fc5-a1d6-8e3983af52a7): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:22:29.468824 kubelet[1411]: E1002 19:22:29.468683 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-h6kbg" podUID="11e5ca46-774e-4fc5-a1d6-8e3983af52a7" Oct 2 19:22:29.782895 kubelet[1411]: I1002 19:22:29.782800 1411 scope.go:117] "RemoveContainer" containerID="731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead" Oct 2 19:22:29.783261 kubelet[1411]: I1002 19:22:29.783237 1411 scope.go:117] "RemoveContainer" containerID="731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead" Oct 2 19:22:29.784383 env[1101]: time="2023-10-02T19:22:29.784343187Z" level=info msg="RemoveContainer for \"731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead\"" Oct 2 19:22:29.784548 env[1101]: time="2023-10-02T19:22:29.784527772Z" level=info msg="RemoveContainer for \"731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead\"" Oct 2 
19:22:29.784705 env[1101]: time="2023-10-02T19:22:29.784646721Z" level=error msg="RemoveContainer for \"731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead\" failed" error="failed to set removing state for container \"731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead\": container is already in removing state" Oct 2 19:22:29.784908 kubelet[1411]: E1002 19:22:29.784752 1411 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead\": container is already in removing state" containerID="731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead" Oct 2 19:22:29.784908 kubelet[1411]: E1002 19:22:29.784774 1411 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead": container is already in removing state; Skipping pod "cilium-h6kbg_kube-system(11e5ca46-774e-4fc5-a1d6-8e3983af52a7)" Oct 2 19:22:29.784908 kubelet[1411]: E1002 19:22:29.784822 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:22:29.785094 kubelet[1411]: E1002 19:22:29.785050 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-h6kbg_kube-system(11e5ca46-774e-4fc5-a1d6-8e3983af52a7)\"" pod="kube-system/cilium-h6kbg" podUID="11e5ca46-774e-4fc5-a1d6-8e3983af52a7" Oct 2 19:22:29.787062 env[1101]: time="2023-10-02T19:22:29.787031579Z" level=info msg="RemoveContainer for \"731b632e23161aa73ea595c90055a8371a706b60d01b1f06e204a0a5bbf44ead\" returns successfully" Oct 2 19:22:30.426252 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6495c5fbca49f89984f05dee4f6733a8bdbf487f27e991988387b6b834f180f5-rootfs.mount: Deactivated successfully. 
Oct 2 19:22:30.437054 kubelet[1411]: E1002 19:22:30.436996 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:30.670125 kubelet[1411]: E1002 19:22:30.670092 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:31.438142 kubelet[1411]: E1002 19:22:31.438091 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:32.438483 kubelet[1411]: E1002 19:22:32.438445 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:32.563344 kubelet[1411]: W1002 19:22:32.563315 1411 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11e5ca46_774e_4fc5_a1d6_8e3983af52a7.slice/cri-containerd-6495c5fbca49f89984f05dee4f6733a8bdbf487f27e991988387b6b834f180f5.scope WatchSource:0}: task 6495c5fbca49f89984f05dee4f6733a8bdbf487f27e991988387b6b834f180f5 not found: not found Oct 2 19:22:33.439279 kubelet[1411]: E1002 19:22:33.439216 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:34.439783 kubelet[1411]: E1002 19:22:34.439738 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:35.439912 kubelet[1411]: E1002 19:22:35.439840 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:35.670910 kubelet[1411]: E1002 19:22:35.670878 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:36.440112 kubelet[1411]: E1002 19:22:36.440062 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:37.440953 kubelet[1411]: E1002 19:22:37.440890 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:38.441737 kubelet[1411]: E1002 19:22:38.441699 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:39.442884 kubelet[1411]: E1002 19:22:39.442834 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:40.285411 kubelet[1411]: E1002 19:22:40.285367 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:40.295171 env[1101]: time="2023-10-02T19:22:40.295135308Z" level=info msg="StopPodSandbox for \"177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a\"" Oct 2 19:22:40.295494 env[1101]: time="2023-10-02T19:22:40.295220412Z" level=info msg="TearDown network for sandbox \"177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a\" successfully" Oct 2 19:22:40.295494 env[1101]: time="2023-10-02T19:22:40.295263805Z" level=info msg="StopPodSandbox for \"177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a\" returns successfully" Oct 2 19:22:40.295598 env[1101]: time="2023-10-02T19:22:40.295560414Z" level=info 
msg="RemovePodSandbox for \"177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a\"" Oct 2 19:22:40.295647 env[1101]: time="2023-10-02T19:22:40.295601663Z" level=info msg="Forcibly stopping sandbox \"177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a\"" Oct 2 19:22:40.295729 env[1101]: time="2023-10-02T19:22:40.295709410Z" level=info msg="TearDown network for sandbox \"177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a\" successfully" Oct 2 19:22:40.298671 env[1101]: time="2023-10-02T19:22:40.298640431Z" level=info msg="RemovePodSandbox \"177c97a340324fdfa54a0e412eedcb7bc48e43fe1671d3759c6f00560f24b86a\" returns successfully" Oct 2 19:22:40.443040 kubelet[1411]: E1002 19:22:40.442982 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:40.672271 kubelet[1411]: E1002 19:22:40.672222 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:41.418694 kubelet[1411]: E1002 19:22:41.418637 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:22:41.443361 kubelet[1411]: E1002 19:22:41.443319 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:42.443889 kubelet[1411]: E1002 19:22:42.443851 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:43.444258 kubelet[1411]: E1002 19:22:43.444199 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:44.418704 kubelet[1411]: E1002 19:22:44.418659 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:22:44.418963 kubelet[1411]: E1002 19:22:44.418920 1411 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-h6kbg_kube-system(11e5ca46-774e-4fc5-a1d6-8e3983af52a7)\"" pod="kube-system/cilium-h6kbg" podUID="11e5ca46-774e-4fc5-a1d6-8e3983af52a7" Oct 2 19:22:44.444514 kubelet[1411]: E1002 19:22:44.444452 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:45.445213 kubelet[1411]: E1002 19:22:45.445120 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:45.673810 kubelet[1411]: E1002 19:22:45.673769 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:46.445526 kubelet[1411]: E1002 19:22:46.445467 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:47.446589 kubelet[1411]: E1002 19:22:47.446529 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:48.447390 kubelet[1411]: E1002 19:22:48.447334 1411 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:49.447728 kubelet[1411]: E1002 19:22:49.447638 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:50.447893 kubelet[1411]: E1002 19:22:50.447819 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:50.674825 kubelet[1411]: E1002 19:22:50.674796 1411 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:51.448955 kubelet[1411]: E1002 19:22:51.448879 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:51.854102 env[1101]: time="2023-10-02T19:22:51.854063517Z" level=info msg="StopPodSandbox for \"28b00585ed33a48d80c30b80c41a7c14af5fb93f26a7eff58643abba3c3e4959\"" Oct 2 19:22:51.854518 env[1101]: time="2023-10-02T19:22:51.854121207Z" level=info msg="Container to stop \"6495c5fbca49f89984f05dee4f6733a8bdbf487f27e991988387b6b834f180f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:22:51.855535 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-28b00585ed33a48d80c30b80c41a7c14af5fb93f26a7eff58643abba3c3e4959-shm.mount: Deactivated successfully. Oct 2 19:22:51.858849 systemd[1]: cri-containerd-28b00585ed33a48d80c30b80c41a7c14af5fb93f26a7eff58643abba3c3e4959.scope: Deactivated successfully. Oct 2 19:22:51.858000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:22:51.860681 kernel: kauditd_printk_skb: 50 callbacks suppressed Oct 2 19:22:51.860755 kernel: audit: type=1334 audit(1696274571.858:700): prog-id=79 op=UNLOAD Oct 2 19:22:51.863000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:22:51.864951 kernel: audit: type=1334 audit(1696274571.863:701): prog-id=82 op=UNLOAD Oct 2 19:22:51.874374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28b00585ed33a48d80c30b80c41a7c14af5fb93f26a7eff58643abba3c3e4959-rootfs.mount: Deactivated successfully. Oct 2 19:22:51.993203 env[1101]: time="2023-10-02T19:22:51.993141229Z" level=info msg="StopContainer for \"fb1d43f9a04cc8aba72d3f3a565a6f145435b7ad75432319844b5e746ac89868\" with timeout 30 (s)" Oct 2 19:22:51.993530 env[1101]: time="2023-10-02T19:22:51.993508312Z" level=info msg="Stop container \"fb1d43f9a04cc8aba72d3f3a565a6f145435b7ad75432319844b5e746ac89868\" with signal terminated" Oct 2 19:22:52.002685 env[1101]: time="2023-10-02T19:22:52.002638819Z" level=info msg="shim disconnected" id=28b00585ed33a48d80c30b80c41a7c14af5fb93f26a7eff58643abba3c3e4959 Oct 2 19:22:52.002980 env[1101]: time="2023-10-02T19:22:52.002930217Z" level=warning msg="cleaning up after shim disconnected" id=28b00585ed33a48d80c30b80c41a7c14af5fb93f26a7eff58643abba3c3e4959 namespace=k8s.io Oct 2 19:22:52.002980 env[1101]: time="2023-10-02T19:22:52.002965104Z" level=info msg="cleaning up dead shim" Oct 2 19:22:52.005000 audit: BPF prog-id=83 op=UNLOAD Oct 2 19:22:52.006718 systemd[1]: cri-containerd-fb1d43f9a04cc8aba72d3f3a565a6f145435b7ad75432319844b5e746ac89868.scope: Deactivated successfully. 
Oct 2 19:22:52.007990 kernel: audit: type=1334 audit(1696274572.005:702): prog-id=83 op=UNLOAD Oct 2 19:22:52.013805 env[1101]: time="2023-10-02T19:22:52.013748902Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:22:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2255 runtime=io.containerd.runc.v2\n" Oct 2 19:22:52.013000 audit: BPF prog-id=86 op=UNLOAD Oct 2 19:22:52.014376 env[1101]: time="2023-10-02T19:22:52.014348568Z" level=info msg="TearDown network for sandbox \"28b00585ed33a48d80c30b80c41a7c14af5fb93f26a7eff58643abba3c3e4959\" successfully" Oct 2 19:22:52.014376 env[1101]: time="2023-10-02T19:22:52.014370200Z" level=info msg="StopPodSandbox for \"28b00585ed33a48d80c30b80c41a7c14af5fb93f26a7eff58643abba3c3e4959\" returns successfully" Oct 2 19:22:52.014961 kernel: audit: type=1334 audit(1696274572.013:703): prog-id=86 op=UNLOAD Oct 2 19:22:52.023279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb1d43f9a04cc8aba72d3f3a565a6f145435b7ad75432319844b5e746ac89868-rootfs.mount: Deactivated successfully. Oct 2 19:22:52.034779 kubelet[1411]: I1002 19:22:52.034748 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-host-proc-sys-net\") pod \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " Oct 2 19:22:52.034916 kubelet[1411]: I1002 19:22:52.034780 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "11e5ca46-774e-4fc5-a1d6-8e3983af52a7" (UID: "11e5ca46-774e-4fc5-a1d6-8e3983af52a7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:52.034916 kubelet[1411]: I1002 19:22:52.034809 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-hostproc\") pod \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " Oct 2 19:22:52.034916 kubelet[1411]: I1002 19:22:52.034829 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-xtables-lock\") pod \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " Oct 2 19:22:52.034916 kubelet[1411]: I1002 19:22:52.034842 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-hostproc" (OuterVolumeSpecName: "hostproc") pod "11e5ca46-774e-4fc5-a1d6-8e3983af52a7" (UID: "11e5ca46-774e-4fc5-a1d6-8e3983af52a7"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:52.034916 kubelet[1411]: I1002 19:22:52.034845 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-lib-modules\") pod \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " Oct 2 19:22:52.034916 kubelet[1411]: I1002 19:22:52.034866 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "11e5ca46-774e-4fc5-a1d6-8e3983af52a7" (UID: "11e5ca46-774e-4fc5-a1d6-8e3983af52a7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:52.034916 kubelet[1411]: I1002 19:22:52.034880 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-hubble-tls\") pod \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " Oct 2 19:22:52.034916 kubelet[1411]: I1002 19:22:52.034889 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "11e5ca46-774e-4fc5-a1d6-8e3983af52a7" (UID: "11e5ca46-774e-4fc5-a1d6-8e3983af52a7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:52.034916 kubelet[1411]: I1002 19:22:52.034900 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-cilium-run\") pod \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " Oct 2 19:22:52.034916 kubelet[1411]: I1002 19:22:52.034923 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-cilium-config-path\") pod \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " Oct 2 19:22:52.035167 kubelet[1411]: I1002 19:22:52.034951 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-bpf-maps\") pod \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " Oct 2 19:22:52.035167 kubelet[1411]: I1002 19:22:52.034971 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78mgd\" (UniqueName: \"kubernetes.io/projected/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-kube-api-access-78mgd\") pod \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " Oct 2 19:22:52.035167 kubelet[1411]: I1002 19:22:52.034992 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-cilium-ipsec-secrets\") pod \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " Oct 2 19:22:52.035167 kubelet[1411]: I1002 19:22:52.035009 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-clustermesh-secrets\") pod \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " Oct 2 19:22:52.035167 kubelet[1411]: I1002 19:22:52.035027 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-cilium-cgroup\") pod \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " Oct 2 19:22:52.035167 kubelet[1411]: I1002 19:22:52.035051 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-host-proc-sys-kernel\") pod \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " Oct 2 19:22:52.035167 kubelet[1411]: I1002 19:22:52.035072 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-cni-path\") pod \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " Oct 2 19:22:52.035167 kubelet[1411]: I1002 19:22:52.035089 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-etc-cni-netd\") pod \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\" (UID: \"11e5ca46-774e-4fc5-a1d6-8e3983af52a7\") " Oct 2 19:22:52.035167 kubelet[1411]: I1002 19:22:52.035117 1411 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-lib-modules\") on node \"10.0.0.130\" DevicePath \"\"" Oct 2 19:22:52.035167 kubelet[1411]: I1002 19:22:52.035128 1411 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-hostproc\") on node \"10.0.0.130\" DevicePath \"\"" Oct 2 19:22:52.035167 kubelet[1411]: I1002 19:22:52.035137 1411 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-xtables-lock\") on node \"10.0.0.130\" DevicePath \"\"" Oct 2 19:22:52.035167 kubelet[1411]: I1002 19:22:52.035146 1411 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-host-proc-sys-net\") on node \"10.0.0.130\" DevicePath \"\"" Oct 2 19:22:52.035167 kubelet[1411]: I1002 19:22:52.035161 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "11e5ca46-774e-4fc5-a1d6-8e3983af52a7" (UID: "11e5ca46-774e-4fc5-a1d6-8e3983af52a7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:52.035498 kubelet[1411]: I1002 19:22:52.035397 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "11e5ca46-774e-4fc5-a1d6-8e3983af52a7" (UID: "11e5ca46-774e-4fc5-a1d6-8e3983af52a7"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:52.035498 kubelet[1411]: I1002 19:22:52.035413 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "11e5ca46-774e-4fc5-a1d6-8e3983af52a7" (UID: "11e5ca46-774e-4fc5-a1d6-8e3983af52a7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:52.035498 kubelet[1411]: I1002 19:22:52.035426 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-cni-path" (OuterVolumeSpecName: "cni-path") pod "11e5ca46-774e-4fc5-a1d6-8e3983af52a7" (UID: "11e5ca46-774e-4fc5-a1d6-8e3983af52a7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:52.035572 kubelet[1411]: I1002 19:22:52.035489 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "11e5ca46-774e-4fc5-a1d6-8e3983af52a7" (UID: "11e5ca46-774e-4fc5-a1d6-8e3983af52a7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:52.035572 kubelet[1411]: I1002 19:22:52.035538 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "11e5ca46-774e-4fc5-a1d6-8e3983af52a7" (UID: "11e5ca46-774e-4fc5-a1d6-8e3983af52a7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:52.037796 systemd[1]: var-lib-kubelet-pods-11e5ca46\x2d774e\x2d4fc5\x2da1d6\x2d8e3983af52a7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:22:52.039969 kubelet[1411]: I1002 19:22:52.038371 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "11e5ca46-774e-4fc5-a1d6-8e3983af52a7" (UID: "11e5ca46-774e-4fc5-a1d6-8e3983af52a7"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:22:52.039969 kubelet[1411]: I1002 19:22:52.038432 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "11e5ca46-774e-4fc5-a1d6-8e3983af52a7" (UID: "11e5ca46-774e-4fc5-a1d6-8e3983af52a7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:22:52.039160 systemd[1]: var-lib-kubelet-pods-11e5ca46\x2d774e\x2d4fc5\x2da1d6\x2d8e3983af52a7-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 2 19:22:52.039228 systemd[1]: var-lib-kubelet-pods-11e5ca46\x2d774e\x2d4fc5\x2da1d6\x2d8e3983af52a7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Oct 2 19:22:52.040759 kubelet[1411]: I1002 19:22:52.040710 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-kube-api-access-78mgd" (OuterVolumeSpecName: "kube-api-access-78mgd") pod "11e5ca46-774e-4fc5-a1d6-8e3983af52a7" (UID: "11e5ca46-774e-4fc5-a1d6-8e3983af52a7"). InnerVolumeSpecName "kube-api-access-78mgd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:22:52.040759 kubelet[1411]: I1002 19:22:52.040721 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "11e5ca46-774e-4fc5-a1d6-8e3983af52a7" (UID: "11e5ca46-774e-4fc5-a1d6-8e3983af52a7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 2 19:22:52.040961 kubelet[1411]: I1002 19:22:52.040870 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "11e5ca46-774e-4fc5-a1d6-8e3983af52a7" (UID: "11e5ca46-774e-4fc5-a1d6-8e3983af52a7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:22:52.044704 env[1101]: time="2023-10-02T19:22:52.044646453Z" level=info msg="shim disconnected" id=fb1d43f9a04cc8aba72d3f3a565a6f145435b7ad75432319844b5e746ac89868
Oct 2 19:22:52.044830 env[1101]: time="2023-10-02T19:22:52.044704264Z" level=warning msg="cleaning up after shim disconnected" id=fb1d43f9a04cc8aba72d3f3a565a6f145435b7ad75432319844b5e746ac89868 namespace=k8s.io
Oct 2 19:22:52.044830 env[1101]: time="2023-10-02T19:22:52.044715375Z" level=info msg="cleaning up dead shim"
Oct 2 19:22:52.051212 env[1101]: time="2023-10-02T19:22:52.051159872Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:22:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2284 runtime=io.containerd.runc.v2\n"
Oct 2 19:22:52.092891 env[1101]: time="2023-10-02T19:22:52.092829698Z" level=info msg="StopContainer for \"fb1d43f9a04cc8aba72d3f3a565a6f145435b7ad75432319844b5e746ac89868\" returns successfully"
Oct 2 19:22:52.093482 env[1101]: time="2023-10-02T19:22:52.093445736Z" level=info msg="StopPodSandbox for \"75035aa9e6e12d566324ae23883a3426099a68dd1af685be80323b7bba55b746\""
Oct 2 19:22:52.093533 env[1101]: time="2023-10-02T19:22:52.093505961Z" level=info msg="Container to stop \"fb1d43f9a04cc8aba72d3f3a565a6f145435b7ad75432319844b5e746ac89868\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 2 19:22:52.099317 systemd[1]: cri-containerd-75035aa9e6e12d566324ae23883a3426099a68dd1af685be80323b7bba55b746.scope: Deactivated successfully.
Oct 2 19:22:52.098000 audit: BPF prog-id=75 op=UNLOAD
Oct 2 19:22:52.100959 kernel: audit: type=1334 audit(1696274572.098:704): prog-id=75 op=UNLOAD
Oct 2 19:22:52.103000 audit: BPF prog-id=78 op=UNLOAD
Oct 2 19:22:52.105000 kernel: audit: type=1334 audit(1696274572.103:705): prog-id=78 op=UNLOAD
Oct 2 19:22:52.135344 kubelet[1411]: I1002 19:22:52.135288 1411 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-bpf-maps\") on node \"10.0.0.130\" DevicePath \"\""
Oct 2 19:22:52.135344 kubelet[1411]: I1002 19:22:52.135319 1411 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-78mgd\" (UniqueName: \"kubernetes.io/projected/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-kube-api-access-78mgd\") on node \"10.0.0.130\" DevicePath \"\""
Oct 2 19:22:52.135344 kubelet[1411]: I1002 19:22:52.135328 1411 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-cilium-config-path\") on node \"10.0.0.130\" DevicePath \"\""
Oct 2 19:22:52.135344 kubelet[1411]: I1002 19:22:52.135336 1411 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-clustermesh-secrets\") on node \"10.0.0.130\" DevicePath \"\""
Oct 2 19:22:52.135344 kubelet[1411]: I1002 19:22:52.135345 1411 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-cilium-cgroup\") on node \"10.0.0.130\" DevicePath \"\""
Oct 2 19:22:52.135344 kubelet[1411]: I1002 19:22:52.135354 1411 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-host-proc-sys-kernel\") on node \"10.0.0.130\" DevicePath \"\""
Oct 2 19:22:52.135344 kubelet[1411]: I1002 19:22:52.135362 1411 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-cni-path\") on node \"10.0.0.130\" DevicePath \"\""
Oct 2 19:22:52.135700 kubelet[1411]: I1002 19:22:52.135372 1411 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-etc-cni-netd\") on node \"10.0.0.130\" DevicePath \"\""
Oct 2 19:22:52.135700 kubelet[1411]: I1002 19:22:52.135381 1411 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-cilium-ipsec-secrets\") on node \"10.0.0.130\" DevicePath \"\""
Oct 2 19:22:52.135700 kubelet[1411]: I1002 19:22:52.135390 1411 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-hubble-tls\") on node \"10.0.0.130\" DevicePath \"\""
Oct 2 19:22:52.135700 kubelet[1411]: I1002 19:22:52.135398 1411 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/11e5ca46-774e-4fc5-a1d6-8e3983af52a7-cilium-run\") on node \"10.0.0.130\" DevicePath \"\""
Oct 2 19:22:52.167670 env[1101]: time="2023-10-02T19:22:52.167595680Z" level=info msg="shim disconnected" id=75035aa9e6e12d566324ae23883a3426099a68dd1af685be80323b7bba55b746
Oct 2 19:22:52.167670 env[1101]: time="2023-10-02T19:22:52.167653050Z" level=warning msg="cleaning up after shim disconnected" id=75035aa9e6e12d566324ae23883a3426099a68dd1af685be80323b7bba55b746 namespace=k8s.io
Oct 2 19:22:52.167670 env[1101]: time="2023-10-02T19:22:52.167666907Z" level=info msg="cleaning up dead shim"
Oct 2 19:22:52.174535 env[1101]: time="2023-10-02T19:22:52.174507872Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:22:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2314 runtime=io.containerd.runc.v2\n"
Oct 2 19:22:52.174885 env[1101]: time="2023-10-02T19:22:52.174855897Z" level=info msg="TearDown network for sandbox \"75035aa9e6e12d566324ae23883a3426099a68dd1af685be80323b7bba55b746\" successfully"
Oct 2 19:22:52.174992 env[1101]: time="2023-10-02T19:22:52.174968253Z" level=info msg="StopPodSandbox for \"75035aa9e6e12d566324ae23883a3426099a68dd1af685be80323b7bba55b746\" returns successfully"
Oct 2 19:22:52.236360 kubelet[1411]: I1002 19:22:52.236316 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxxg8\" (UniqueName: \"kubernetes.io/projected/3341abbd-5c7e-482b-a180-c6399f0955c6-kube-api-access-dxxg8\") pod \"3341abbd-5c7e-482b-a180-c6399f0955c6\" (UID: \"3341abbd-5c7e-482b-a180-c6399f0955c6\") "
Oct 2 19:22:52.236360 kubelet[1411]: I1002 19:22:52.236362 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3341abbd-5c7e-482b-a180-c6399f0955c6-cilium-config-path\") pod \"3341abbd-5c7e-482b-a180-c6399f0955c6\" (UID: \"3341abbd-5c7e-482b-a180-c6399f0955c6\") "
Oct 2 19:22:52.238580 kubelet[1411]: I1002 19:22:52.238552 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3341abbd-5c7e-482b-a180-c6399f0955c6-kube-api-access-dxxg8" (OuterVolumeSpecName: "kube-api-access-dxxg8") pod "3341abbd-5c7e-482b-a180-c6399f0955c6" (UID: "3341abbd-5c7e-482b-a180-c6399f0955c6"). InnerVolumeSpecName "kube-api-access-dxxg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:22:52.238786 kubelet[1411]: I1002 19:22:52.238760 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3341abbd-5c7e-482b-a180-c6399f0955c6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3341abbd-5c7e-482b-a180-c6399f0955c6" (UID: "3341abbd-5c7e-482b-a180-c6399f0955c6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 2 19:22:52.337085 kubelet[1411]: I1002 19:22:52.337029 1411 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dxxg8\" (UniqueName: \"kubernetes.io/projected/3341abbd-5c7e-482b-a180-c6399f0955c6-kube-api-access-dxxg8\") on node \"10.0.0.130\" DevicePath \"\""
Oct 2 19:22:52.337085 kubelet[1411]: I1002 19:22:52.337064 1411 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3341abbd-5c7e-482b-a180-c6399f0955c6-cilium-config-path\") on node \"10.0.0.130\" DevicePath \"\""
Oct 2 19:22:52.423451 systemd[1]: Removed slice kubepods-burstable-pod11e5ca46_774e_4fc5_a1d6_8e3983af52a7.slice.
Oct 2 19:22:52.424579 systemd[1]: Removed slice kubepods-besteffort-pod3341abbd_5c7e_482b_a180_c6399f0955c6.slice.
Oct 2 19:22:52.449230 kubelet[1411]: E1002 19:22:52.449200 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:22:52.817989 kubelet[1411]: I1002 19:22:52.817967 1411 scope.go:117] "RemoveContainer" containerID="fb1d43f9a04cc8aba72d3f3a565a6f145435b7ad75432319844b5e746ac89868"
Oct 2 19:22:52.818900 env[1101]: time="2023-10-02T19:22:52.818857211Z" level=info msg="RemoveContainer for \"fb1d43f9a04cc8aba72d3f3a565a6f145435b7ad75432319844b5e746ac89868\""
Oct 2 19:22:52.822192 env[1101]: time="2023-10-02T19:22:52.822150130Z" level=info msg="RemoveContainer for \"fb1d43f9a04cc8aba72d3f3a565a6f145435b7ad75432319844b5e746ac89868\" returns successfully"
Oct 2 19:22:52.822331 kubelet[1411]: I1002 19:22:52.822304 1411 scope.go:117] "RemoveContainer" containerID="fb1d43f9a04cc8aba72d3f3a565a6f145435b7ad75432319844b5e746ac89868"
Oct 2 19:22:52.822583 env[1101]: time="2023-10-02T19:22:52.822509528Z" level=error msg="ContainerStatus for \"fb1d43f9a04cc8aba72d3f3a565a6f145435b7ad75432319844b5e746ac89868\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb1d43f9a04cc8aba72d3f3a565a6f145435b7ad75432319844b5e746ac89868\": not found"
Oct 2 19:22:52.822783 kubelet[1411]: E1002 19:22:52.822761 1411 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb1d43f9a04cc8aba72d3f3a565a6f145435b7ad75432319844b5e746ac89868\": not found" containerID="fb1d43f9a04cc8aba72d3f3a565a6f145435b7ad75432319844b5e746ac89868"
Oct 2 19:22:52.822844 kubelet[1411]: I1002 19:22:52.822838 1411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb1d43f9a04cc8aba72d3f3a565a6f145435b7ad75432319844b5e746ac89868"} err="failed to get container status \"fb1d43f9a04cc8aba72d3f3a565a6f145435b7ad75432319844b5e746ac89868\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb1d43f9a04cc8aba72d3f3a565a6f145435b7ad75432319844b5e746ac89868\": not found"
Oct 2 19:22:52.822885 kubelet[1411]: I1002 19:22:52.822848 1411 scope.go:117] "RemoveContainer" containerID="6495c5fbca49f89984f05dee4f6733a8bdbf487f27e991988387b6b834f180f5"
Oct 2 19:22:52.823642 env[1101]: time="2023-10-02T19:22:52.823616284Z" level=info msg="RemoveContainer for \"6495c5fbca49f89984f05dee4f6733a8bdbf487f27e991988387b6b834f180f5\""
Oct 2 19:22:52.825828 env[1101]: time="2023-10-02T19:22:52.825790684Z" level=info msg="RemoveContainer for \"6495c5fbca49f89984f05dee4f6733a8bdbf487f27e991988387b6b834f180f5\" returns successfully"
Oct 2 19:22:52.855218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75035aa9e6e12d566324ae23883a3426099a68dd1af685be80323b7bba55b746-rootfs.mount: Deactivated successfully.
Oct 2 19:22:52.855317 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-75035aa9e6e12d566324ae23883a3426099a68dd1af685be80323b7bba55b746-shm.mount: Deactivated successfully.
Oct 2 19:22:52.855370 systemd[1]: var-lib-kubelet-pods-11e5ca46\x2d774e\x2d4fc5\x2da1d6\x2d8e3983af52a7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d78mgd.mount: Deactivated successfully.
Oct 2 19:22:52.855423 systemd[1]: var-lib-kubelet-pods-3341abbd\x2d5c7e\x2d482b\x2da180\x2dc6399f0955c6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddxxg8.mount: Deactivated successfully.
Oct 2 19:22:53.450121 kubelet[1411]: E1002 19:22:53.450073 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"