Oct 2 19:39:11.082165 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023 Oct 2 19:39:11.082203 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:39:11.082238 kernel: BIOS-provided physical RAM map: Oct 2 19:39:11.082252 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Oct 2 19:39:11.082264 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Oct 2 19:39:11.082277 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Oct 2 19:39:11.082306 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Oct 2 19:39:11.082320 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Oct 2 19:39:11.082334 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Oct 2 19:39:11.082348 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Oct 2 19:39:11.082362 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Oct 2 19:39:11.082375 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Oct 2 19:39:11.082389 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Oct 2 19:39:11.082402 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Oct 2 19:39:11.082437 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Oct 2 19:39:11.082453 kernel: NX (Execute Disable) protection: active Oct 2 19:39:11.082467 kernel: efi: EFI v2.70 by EDK II Oct 2 19:39:11.082495 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbe386218 RNG=0xbfb73018 TPMEventLog=0xbe2c8018 Oct 2 19:39:11.082519 kernel: random: crng init done Oct 2 19:39:11.082534 kernel: SMBIOS 2.4 present. 
Oct 2 19:39:11.082549 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/04/2023 Oct 2 19:39:11.082564 kernel: Hypervisor detected: KVM Oct 2 19:39:11.082582 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 2 19:39:11.082608 kernel: kvm-clock: cpu 0, msr 21cf8a001, primary cpu clock Oct 2 19:39:11.082623 kernel: kvm-clock: using sched offset of 12697224273 cycles Oct 2 19:39:11.082638 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 2 19:39:11.082653 kernel: tsc: Detected 2299.998 MHz processor Oct 2 19:39:11.082668 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 2 19:39:11.082684 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 2 19:39:11.082700 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Oct 2 19:39:11.082715 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 2 19:39:11.082731 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Oct 2 19:39:11.082750 kernel: Using GB pages for direct mapping Oct 2 19:39:11.082765 kernel: Secure boot disabled Oct 2 19:39:11.082778 kernel: ACPI: Early table checksum verification disabled Oct 2 19:39:11.082792 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Oct 2 19:39:11.082806 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Oct 2 19:39:11.082820 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Oct 2 19:39:11.082835 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Oct 2 19:39:11.082851 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Oct 2 19:39:11.082876 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20211217) Oct 2 19:39:11.082893 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Oct 2 19:39:11.082909 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Oct 2 19:39:11.082926 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Oct 2 19:39:11.082943 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Oct 2 19:39:11.082959 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Oct 2 19:39:11.082979 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Oct 2 19:39:11.082995 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Oct 2 19:39:11.083011 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Oct 2 19:39:11.083027 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Oct 2 19:39:11.083044 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Oct 2 19:39:11.083060 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Oct 2 19:39:11.083076 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Oct 2 19:39:11.083092 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Oct 2 19:39:11.083109 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Oct 2 19:39:11.083129 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Oct 2 19:39:11.083145 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Oct 2 19:39:11.083159 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Oct 2 19:39:11.083175 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Oct 2 19:39:11.083191 kernel: ACPI: SRAT: Node 0 PXM 0 
[mem 0x100000000-0x21fffffff] Oct 2 19:39:11.083207 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Oct 2 19:39:11.083224 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Oct 2 19:39:11.083240 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Oct 2 19:39:11.083256 kernel: Zone ranges: Oct 2 19:39:11.083276 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 2 19:39:11.083293 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Oct 2 19:39:11.083317 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Oct 2 19:39:11.083337 kernel: Movable zone start for each node Oct 2 19:39:11.083352 kernel: Early memory node ranges Oct 2 19:39:11.083368 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Oct 2 19:39:11.083385 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Oct 2 19:39:11.083401 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Oct 2 19:39:11.083417 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Oct 2 19:39:11.083437 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Oct 2 19:39:11.083454 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Oct 2 19:39:11.083470 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 19:39:11.085941 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Oct 2 19:39:11.085968 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Oct 2 19:39:11.085985 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Oct 2 19:39:11.086001 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Oct 2 19:39:11.086018 kernel: ACPI: PM-Timer IO Port: 0xb008 Oct 2 19:39:11.086169 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 2 19:39:11.086195 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 2 19:39:11.086211 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 2 19:39:11.086228 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 2 19:39:11.086242 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 2 19:39:11.086257 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 2 19:39:11.086273 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 2 19:39:11.086291 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Oct 2 19:39:11.086317 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Oct 2 19:39:11.086332 kernel: Booting paravirtualized kernel on KVM Oct 2 19:39:11.086352 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 2 19:39:11.086367 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Oct 2 19:39:11.086382 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Oct 2 19:39:11.086397 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Oct 2 19:39:11.086557 kernel: pcpu-alloc: [0] 0 1 Oct 2 19:39:11.086575 kernel: kvm-guest: PV spinlocks enabled Oct 2 19:39:11.086593 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 2 19:39:11.086609 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1931256 Oct 2 19:39:11.086763 kernel: Policy zone: Normal Oct 2 19:39:11.086789 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:39:11.086806 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:39:11.086889 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Oct 2 19:39:11.086912 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 19:39:11.086931 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:39:11.086948 kernel: Memory: 7536584K/7860584K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 323740K reserved, 0K cma-reserved) Oct 2 19:39:11.086964 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 2 19:39:11.086980 kernel: Kernel/User page tables isolation: enabled Oct 2 19:39:11.087000 kernel: ftrace: allocating 34453 entries in 135 pages Oct 2 19:39:11.087016 kernel: ftrace: allocated 135 pages with 4 groups Oct 2 19:39:11.087032 kernel: rcu: Hierarchical RCU implementation. Oct 2 19:39:11.087050 kernel: rcu: RCU event tracing is enabled. Oct 2 19:39:11.087066 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 2 19:39:11.087082 kernel: Rude variant of Tasks RCU enabled. Oct 2 19:39:11.087098 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:39:11.087114 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 2 19:39:11.087130 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 2 19:39:11.087151 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Oct 2 19:39:11.087181 kernel: Console: colour dummy device 80x25 Oct 2 19:39:11.087197 kernel: printk: console [ttyS0] enabled Oct 2 19:39:11.087218 kernel: ACPI: Core revision 20210730 Oct 2 19:39:11.087235 kernel: APIC: Switch to symmetric I/O mode setup Oct 2 19:39:11.087251 kernel: x2apic enabled Oct 2 19:39:11.087267 kernel: Switched APIC routing to physical x2apic. Oct 2 19:39:11.087285 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Oct 2 19:39:11.087309 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Oct 2 19:39:11.087327 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Oct 2 19:39:11.087348 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Oct 2 19:39:11.087364 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Oct 2 19:39:11.087382 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 2 19:39:11.087399 kernel: Spectre V2 : Mitigation: IBRS Oct 2 19:39:11.087416 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 2 19:39:11.087433 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 2 19:39:11.087453 kernel: RETBleed: Mitigation: IBRS Oct 2 19:39:11.087470 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 2 19:39:11.087510 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl Oct 2 19:39:11.087527 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Oct 2 19:39:11.087545 kernel: MDS: Mitigation: Clear CPU buffers Oct 2 19:39:11.087562 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 2 19:39:11.087579 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 2 19:39:11.087596 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 2 19:39:11.087613 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 2 19:39:11.087634 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 2 19:39:11.087651 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Oct 2 19:39:11.087668 kernel: Freeing SMP alternatives memory: 32K Oct 2 19:39:11.087685 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:39:11.087702 kernel: LSM: Security Framework initializing Oct 2 19:39:11.087719 kernel: SELinux: Initializing. Oct 2 19:39:11.087736 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 2 19:39:11.087753 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 2 19:39:11.087771 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Oct 2 19:39:11.087792 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Oct 2 19:39:11.087808 kernel: signal: max sigframe size: 1776 Oct 2 19:39:11.087826 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:39:11.087843 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 2 19:39:11.087859 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:39:11.087876 kernel: x86: Booting SMP configuration: Oct 2 19:39:11.087893 kernel: .... node #0, CPUs: #1 Oct 2 19:39:11.087910 kernel: kvm-clock: cpu 1, msr 21cf8a041, secondary cpu clock Oct 2 19:39:11.087928 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Oct 2 19:39:11.087950 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Oct 2 19:39:11.087967 kernel: smp: Brought up 1 node, 2 CPUs Oct 2 19:39:11.087984 kernel: smpboot: Max logical packages: 1 Oct 2 19:39:11.088001 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Oct 2 19:39:11.088018 kernel: devtmpfs: initialized Oct 2 19:39:11.088035 kernel: x86/mm: Memory block size: 128MB Oct 2 19:39:11.088051 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Oct 2 19:39:11.088069 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:39:11.088086 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 2 19:39:11.088107 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:39:11.088124 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:39:11.088141 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:39:11.088158 kernel: audit: type=2000 audit(1696275549.814:1): state=initialized audit_enabled=0 res=1 Oct 2 19:39:11.088175 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:39:11.088192 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 2 19:39:11.088208 kernel: cpuidle: using governor menu Oct 2 19:39:11.088225 kernel: ACPI: bus type PCI registered Oct 2 19:39:11.088242 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:39:11.088263 kernel: dca service started, version 1.12.1 Oct 2 19:39:11.088280 kernel: PCI: Using configuration type 1 for base access Oct 2 19:39:11.088304 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Oct 2 19:39:11.088321 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:39:11.088338 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:39:11.088355 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:39:11.088372 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:39:11.088389 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:39:11.088405 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:39:11.088426 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:39:11.088443 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:39:11.088460 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:39:11.088477 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Oct 2 19:39:11.088509 kernel: ACPI: Interpreter enabled Oct 2 19:39:11.088532 kernel: ACPI: PM: (supports S0 S3 S5) Oct 2 19:39:11.088554 kernel: ACPI: Using IOAPIC for interrupt routing Oct 2 19:39:11.088571 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 2 19:39:11.088586 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Oct 2 19:39:11.088606 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 2 19:39:11.088835 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:39:11.089019 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Oct 2 19:39:11.089043 kernel: PCI host bridge to bus 0000:00 Oct 2 19:39:11.089203 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 2 19:39:11.089360 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 2 19:39:11.100310 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 2 19:39:11.100784 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Oct 2 19:39:11.101164 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 2 19:39:11.101356 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Oct 2 19:39:11.101559 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Oct 2 19:39:11.101738 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Oct 2 19:39:11.101903 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Oct 2 19:39:11.102084 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Oct 2 19:39:11.102248 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Oct 2 19:39:11.102421 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Oct 2 19:39:11.102623 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Oct 2 19:39:11.102790 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Oct 2 19:39:11.102954 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Oct 2 19:39:11.103139 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Oct 2 19:39:11.103309 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Oct 2 19:39:11.103470 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Oct 2 19:39:11.115327 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 2 19:39:11.115356 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 2 19:39:11.115375 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 2 19:39:11.115393 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 2 19:39:11.115410 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 2 19:39:11.115437 kernel: iommu: Default domain type: Translated Oct 2 19:39:11.115455 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 2 19:39:11.115473 kernel: vgaarb: loaded Oct 2 19:39:11.115507 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:39:11.115525 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:39:11.115543 kernel: PTP clock support registered Oct 2 19:39:11.115561 kernel: Registered efivars operations Oct 2 19:39:11.115579 kernel: PCI: Using ACPI for IRQ routing Oct 2 19:39:11.115597 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 2 19:39:11.115618 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Oct 2 19:39:11.115637 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Oct 2 19:39:11.115655 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Oct 2 19:39:11.115672 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Oct 2 19:39:11.115689 kernel: clocksource: Switched to clocksource kvm-clock Oct 2 19:39:11.115707 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:39:11.115725 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:39:11.115742 kernel: pnp: PnP ACPI init Oct 2 19:39:11.115760 kernel: pnp: PnP ACPI: found 7 devices Oct 2 19:39:11.115782 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 2 19:39:11.115799 kernel: NET: Registered PF_INET protocol family Oct 2 19:39:11.115816 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 2 19:39:11.115834 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Oct 2 19:39:11.115852 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:39:11.115870 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 19:39:11.115887 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Oct 2 19:39:11.115905 kernel: TCP: Hash tables configured (established 65536 bind 65536) Oct 2 19:39:11.115923 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Oct 2 19:39:11.115945 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Oct 2 19:39:11.115962 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:39:11.115979 kernel: NET: Registered PF_XDP protocol family Oct 2 19:39:11.116168 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 2 19:39:11.116381 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 2 19:39:11.116544 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 2 19:39:11.116688 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Oct 2 19:39:11.116858 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 2 19:39:11.116889 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:39:11.116907 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Oct 2 19:39:11.116926 kernel: software IO TLB: mapped [mem 0x00000000b7ff7000-0x00000000bbff7000] (64MB) Oct 2 19:39:11.116944 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 2 19:39:11.116962 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Oct 2 19:39:11.116980 kernel: clocksource: Switched to clocksource tsc Oct 2 19:39:11.116998 kernel: Initialise system trusted keyrings Oct 2 19:39:11.117016 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Oct 2 19:39:11.117036 kernel: Key type asymmetric registered Oct 2 19:39:11.117053 kernel: Asymmetric key parser 'x509' registered Oct 2 19:39:11.117071 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:39:11.117088 kernel: io scheduler mq-deadline registered Oct 2 
19:39:11.117106 kernel: io scheduler kyber registered Oct 2 19:39:11.117123 kernel: io scheduler bfq registered Oct 2 19:39:11.117141 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 2 19:39:11.117160 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Oct 2 19:39:11.117329 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Oct 2 19:39:11.117357 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Oct 2 19:39:11.119632 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Oct 2 19:39:11.119670 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Oct 2 19:39:11.119853 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Oct 2 19:39:11.119878 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:39:11.119897 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 2 19:39:11.119916 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Oct 2 19:39:11.119934 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Oct 2 19:39:11.119951 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Oct 2 19:39:11.120129 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Oct 2 19:39:11.120155 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 2 19:39:11.120173 kernel: i8042: Warning: Keylock active Oct 2 19:39:11.120190 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 2 19:39:11.120209 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 2 19:39:11.120380 kernel: rtc_cmos 00:00: RTC can wake from S4 Oct 2 19:39:11.120547 kernel: rtc_cmos 00:00: registered as rtc0 Oct 2 19:39:11.120701 kernel: rtc_cmos 00:00: setting system clock to 2023-10-02T19:39:10 UTC (1696275550) Oct 2 19:39:11.120847 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Oct 2 19:39:11.120869 kernel: intel_pstate: CPU model not supported Oct 2 19:39:11.120888 kernel: pstore: Registered efi as persistent store backend Oct 2 19:39:11.120906 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:39:11.120923 kernel: Segment Routing with IPv6 Oct 2 19:39:11.120941 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:39:11.120958 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:39:11.120976 kernel: Key type dns_resolver registered Oct 2 19:39:11.120998 kernel: IPI shorthand broadcast: enabled Oct 2 19:39:11.121016 kernel: sched_clock: Marking stable (709481163, 123710842)->(859881919, -26689914) Oct 2 19:39:11.121034 kernel: registered taskstats version 1 Oct 2 19:39:11.121051 kernel: Loading compiled-in X.509 certificates Oct 2 19:39:11.121069 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 2 19:39:11.121087 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861' Oct 2 19:39:11.121105 kernel: Key type .fscrypt registered Oct 2 19:39:11.121122 kernel: Key type fscrypt-provisioning registered Oct 2 19:39:11.121140 kernel: pstore: Using crash dump compression: deflate Oct 2 19:39:11.121161 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:39:11.121178 kernel: ima: No architecture policies found Oct 2 19:39:11.121195 kernel: Freeing unused kernel image (initmem) memory: 45372K Oct 2 19:39:11.121213 kernel: Write protecting the kernel read-only data: 28672k Oct 2 19:39:11.121230 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 2 19:39:11.121248 kernel: Freeing unused kernel image 
(rodata/data gap) memory: 644K Oct 2 19:39:11.121265 kernel: Run /init as init process Oct 2 19:39:11.121283 kernel: with arguments: Oct 2 19:39:11.121310 kernel: /init Oct 2 19:39:11.121328 kernel: with environment: Oct 2 19:39:11.121345 kernel: HOME=/ Oct 2 19:39:11.121362 kernel: TERM=linux Oct 2 19:39:11.121379 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:39:11.121401 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:39:11.121423 systemd[1]: Detected virtualization kvm. Oct 2 19:39:11.121442 systemd[1]: Detected architecture x86-64. Oct 2 19:39:11.121462 systemd[1]: Running in initrd. Oct 2 19:39:11.121480 systemd[1]: No hostname configured, using default hostname. Oct 2 19:39:11.123603 systemd[1]: Hostname set to . Oct 2 19:39:11.123626 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:39:11.123645 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:39:11.123664 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:39:11.123683 systemd[1]: Reached target cryptsetup.target. Oct 2 19:39:11.123701 systemd[1]: Reached target paths.target. Oct 2 19:39:11.123726 systemd[1]: Reached target slices.target. Oct 2 19:39:11.123744 systemd[1]: Reached target swap.target. Oct 2 19:39:11.123762 systemd[1]: Reached target timers.target. Oct 2 19:39:11.123781 systemd[1]: Listening on iscsid.socket. Oct 2 19:39:11.123799 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:39:11.123818 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:39:11.123837 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:39:11.123860 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:39:11.123878 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:39:11.123896 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:39:11.123915 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:39:11.123934 systemd[1]: Reached target sockets.target. Oct 2 19:39:11.123952 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:39:11.123971 systemd[1]: Finished network-cleanup.service. Oct 2 19:39:11.123990 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:39:11.124009 systemd[1]: Starting systemd-journald.service... Oct 2 19:39:11.124031 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:39:11.124049 systemd[1]: Starting systemd-resolved.service... Oct 2 19:39:11.124069 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:39:11.124105 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:39:11.124128 kernel: audit: type=1130 audit(1696275551.102:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:11.124147 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:39:11.124166 kernel: audit: type=1130 audit(1696275551.109:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:11.124188 systemd[1]: Finished systemd-vconsole-setup.service. 
Oct 2 19:39:11.124213 systemd-journald[190]: Journal started Oct 2 19:39:11.124313 systemd-journald[190]: Runtime Journal (/run/log/journal/35b37c116bf4398c2ff6bc577b6f2bb1) is 8.0M, max 148.8M, 140.8M free. Oct 2 19:39:11.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:11.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:11.117920 systemd-modules-load[191]: Inserted module 'overlay' Oct 2 19:39:11.132634 kernel: audit: type=1130 audit(1696275551.124:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:11.132667 systemd[1]: Started systemd-journald.service. Oct 2 19:39:11.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:11.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:11.142510 kernel: audit: type=1130 audit(1696275551.136:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:11.142775 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:39:11.152875 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:39:11.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:11.170200 systemd-resolved[192]: Positive Trust Anchors: Oct 2 19:39:11.170214 systemd-resolved[192]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:39:11.170272 systemd-resolved[192]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:39:11.176644 kernel: audit: type=1130 audit(1696275551.170:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:11.171290 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:39:11.182001 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:39:11.186611 systemd-resolved[192]: Defaulting to hostname 'linux'. Oct 2 19:39:11.188517 systemd[1]: Started systemd-resolved.service. Oct 2 19:39:11.188698 systemd[1]: Reached target nss-lookup.target. 
Oct 2 19:39:11.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:11.192503 kernel: audit: type=1130 audit(1696275551.187:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:11.192544 kernel: Bridge firewalling registered Oct 2 19:39:11.193202 systemd-modules-load[191]: Inserted module 'br_netfilter' Oct 2 19:39:11.204082 kernel: audit: type=1130 audit(1696275551.193:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:11.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:11.193505 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:39:11.199199 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:39:11.218334 dracut-cmdline[206]: dracut-dracut-053 Oct 2 19:39:11.222942 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:39:11.232620 kernel: SCSI subsystem initialized Oct 2 19:39:11.242134 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:39:11.242206 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:39:11.244221 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:39:11.249872 systemd-modules-load[191]: Inserted module 'dm_multipath' Oct 2 19:39:11.251769 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:39:11.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:11.266647 kernel: audit: type=1130 audit(1696275551.260:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:11.262914 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:39:11.277459 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:39:11.287632 kernel: audit: type=1130 audit(1696275551.279:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:11.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:11.315519 kernel: Loading iSCSI transport class v2.0-870. 
Oct 2 19:39:11.329528 kernel: iscsi: registered transport (tcp) Oct 2 19:39:11.353533 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:39:11.353618 kernel: QLogic iSCSI HBA Driver Oct 2 19:39:11.397934 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:39:11.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:11.403176 systemd[1]: Starting dracut-pre-udev.service... Oct 2 19:39:11.460542 kernel: raid6: avx2x4 gen() 18203 MB/s Oct 2 19:39:11.477525 kernel: raid6: avx2x4 xor() 8169 MB/s Oct 2 19:39:11.494527 kernel: raid6: avx2x2 gen() 18193 MB/s Oct 2 19:39:11.511527 kernel: raid6: avx2x2 xor() 18631 MB/s Oct 2 19:39:11.528524 kernel: raid6: avx2x1 gen() 14346 MB/s Oct 2 19:39:11.545526 kernel: raid6: avx2x1 xor() 16161 MB/s Oct 2 19:39:11.562525 kernel: raid6: sse2x4 gen() 11088 MB/s Oct 2 19:39:11.579524 kernel: raid6: sse2x4 xor() 6733 MB/s Oct 2 19:39:11.596527 kernel: raid6: sse2x2 gen() 12073 MB/s Oct 2 19:39:11.613525 kernel: raid6: sse2x2 xor() 7437 MB/s Oct 2 19:39:11.630530 kernel: raid6: sse2x1 gen() 10592 MB/s Oct 2 19:39:11.647985 kernel: raid6: sse2x1 xor() 5184 MB/s Oct 2 19:39:11.648029 kernel: raid6: using algorithm avx2x4 gen() 18203 MB/s Oct 2 19:39:11.648055 kernel: raid6: .... xor() 8169 MB/s, rmw enabled Oct 2 19:39:11.648688 kernel: raid6: using avx2x2 recovery algorithm Oct 2 19:39:11.663523 kernel: xor: automatically using best checksumming function avx Oct 2 19:39:11.768524 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 2 19:39:11.780751 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:39:11.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:11.780000 audit: BPF prog-id=7 op=LOAD Oct 2 19:39:11.780000 audit: BPF prog-id=8 op=LOAD Oct 2 19:39:11.782435 systemd[1]: Starting systemd-udevd.service... Oct 2 19:39:11.799261 systemd-udevd[388]: Using default interface naming scheme 'v252'. Oct 2 19:39:11.806732 systemd[1]: Started systemd-udevd.service. Oct 2 19:39:11.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:11.811920 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:39:11.834222 dracut-pre-trigger[399]: rd.md=0: removing MD RAID activation Oct 2 19:39:11.873630 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:39:11.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:11.875019 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:39:11.938083 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:39:11.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:12.016507 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:39:12.047905 kernel: scsi host0: Virtio SCSI HBA Oct 2 19:39:12.064752 kernel: AVX2 version of gcm_enc/dec engaged. 
Oct 2 19:39:12.072516 kernel: AES CTR mode by8 optimization enabled Oct 2 19:39:12.096412 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Oct 2 19:39:12.153686 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Oct 2 19:39:12.153922 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Oct 2 19:39:12.154810 kernel: sd 0:0:1:0: [sda] Write Protect is off Oct 2 19:39:12.155041 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Oct 2 19:39:12.155234 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Oct 2 19:39:12.165549 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 2 19:39:12.165621 kernel: GPT:17805311 != 25165823 Oct 2 19:39:12.165644 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 2 19:39:12.165665 kernel: GPT:17805311 != 25165823 Oct 2 19:39:12.166338 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 2 19:39:12.168161 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 19:39:12.170661 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Oct 2 19:39:12.219513 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (443) Oct 2 19:39:12.225370 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:39:12.234300 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:39:12.234517 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:39:12.246382 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:39:12.252547 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:39:12.254785 systemd[1]: Starting disk-uuid.service... Oct 2 19:39:12.265191 disk-uuid[505]: Primary Header is updated. Oct 2 19:39:12.265191 disk-uuid[505]: Secondary Entries is updated. Oct 2 19:39:12.265191 disk-uuid[505]: Secondary Header is updated. Oct 2 19:39:12.277606 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 19:39:12.293524 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 19:39:13.300518 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 19:39:13.301205 disk-uuid[506]: The operation has completed successfully. Oct 2 19:39:13.366260 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:39:13.366414 systemd[1]: Finished disk-uuid.service. Oct 2 19:39:13.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.395660 systemd[1]: Starting verity-setup.service... Oct 2 19:39:13.423512 kernel: device-mapper: verity: sha256 using implementation "sha256-generic" Oct 2 19:39:13.497940 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:39:13.500572 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:39:13.523091 systemd[1]: Finished verity-setup.service. Oct 2 19:39:13.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.609185 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:39:13.610147 systemd[1]: Mounted sysusr-usr.mount. 
Oct 2 19:39:13.610576 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 19:39:13.611526 systemd[1]: Starting ignition-setup.service... Oct 2 19:39:13.675342 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:39:13.675384 kernel: BTRFS info (device sda6): using free space tree Oct 2 19:39:13.675407 kernel: BTRFS info (device sda6): has skinny extents Oct 2 19:39:13.675430 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 2 19:39:13.659713 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 19:39:13.694726 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:39:13.711803 systemd[1]: Finished ignition-setup.service. Oct 2 19:39:13.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.723211 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:39:13.776868 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:39:13.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.776000 audit: BPF prog-id=9 op=LOAD Oct 2 19:39:13.779140 systemd[1]: Starting systemd-networkd.service... Oct 2 19:39:13.812533 systemd-networkd[679]: lo: Link UP Oct 2 19:39:13.812545 systemd-networkd[679]: lo: Gained carrier Oct 2 19:39:13.813342 systemd-networkd[679]: Enumeration completed Oct 2 19:39:13.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.813765 systemd-networkd[679]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:39:13.814025 systemd[1]: Started systemd-networkd.service. Oct 2 19:39:13.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.815689 systemd-networkd[679]: eth0: Link UP Oct 2 19:39:13.901777 iscsid[687]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:39:13.901777 iscsid[687]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Oct 2 19:39:13.901777 iscsid[687]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 2 19:39:13.901777 iscsid[687]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:39:13.901777 iscsid[687]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 19:39:13.901777 iscsid[687]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:39:13.901777 iscsid[687]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:39:13.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:13.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.815697 systemd-networkd[679]: eth0: Gained carrier Oct 2 19:39:14.043509 ignition[623]: Ignition 2.14.0 Oct 2 19:39:13.828629 systemd-networkd[679]: eth0: DHCPv4 address 10.128.0.92/32, gateway 10.128.0.1 acquired from 169.254.169.254 Oct 2 19:39:14.043539 ignition[623]: Stage: fetch-offline Oct 2 19:39:14.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.836344 systemd[1]: Reached target network.target. Oct 2 19:39:14.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:14.043634 ignition[623]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:39:13.846895 systemd[1]: Starting iscsiuio.service... Oct 2 19:39:14.043687 ignition[623]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Oct 2 19:39:13.859794 systemd[1]: Started iscsiuio.service. Oct 2 19:39:14.070844 ignition[623]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 2 19:39:13.880007 systemd[1]: Starting iscsid.service... Oct 2 19:39:14.071046 ignition[623]: parsed url from cmdline: "" Oct 2 19:39:13.893790 systemd[1]: Started iscsid.service. Oct 2 19:39:14.071053 ignition[623]: no config URL provided Oct 2 19:39:13.910159 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:39:14.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:14.071154 ignition[623]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:39:13.929459 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:39:14.071171 ignition[623]: no config at "/usr/lib/ignition/user.ign" Oct 2 19:39:13.985886 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:39:14.071183 ignition[623]: failed to fetch config: resource requires networking Oct 2 19:39:14.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:14.002647 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:39:14.071345 ignition[623]: Ignition finished successfully Oct 2 19:39:14.011598 systemd[1]: Reached target remote-fs.target. Oct 2 19:39:14.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:14.126723 ignition[703]: Ignition 2.14.0 Oct 2 19:39:14.020811 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:39:14.126733 ignition[703]: Stage: fetch Oct 2 19:39:14.077011 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:39:14.126891 ignition[703]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:39:14.097955 systemd[1]: Finished dracut-pre-mount.service. 
Oct 2 19:39:14.126921 ignition[703]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Oct 2 19:39:14.114050 systemd[1]: Starting ignition-fetch.service... Oct 2 19:39:14.134336 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 2 19:39:14.165282 unknown[703]: fetched base config from "system" Oct 2 19:39:14.134552 ignition[703]: parsed url from cmdline: "" Oct 2 19:39:14.165303 unknown[703]: fetched base config from "system" Oct 2 19:39:14.134558 ignition[703]: no config URL provided Oct 2 19:39:14.165320 unknown[703]: fetched user config from "gcp" Oct 2 19:39:14.134565 ignition[703]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:39:14.176127 systemd[1]: Finished ignition-fetch.service. Oct 2 19:39:14.134576 ignition[703]: no config at "/usr/lib/ignition/user.ign" Oct 2 19:39:14.191947 systemd[1]: Starting ignition-kargs.service... Oct 2 19:39:14.134614 ignition[703]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Oct 2 19:39:14.231232 systemd[1]: Finished ignition-kargs.service. Oct 2 19:39:14.143022 ignition[703]: GET result: OK Oct 2 19:39:14.240232 systemd[1]: Starting ignition-disks.service... Oct 2 19:39:14.143114 ignition[703]: parsing config with SHA512: 67b892b08847e6384a8a17be0dffb7cfcd61b5ef3087a73dc0a8f6afe7015d8a17cd1b4f9bd54e7a09a72676708cbfd10f13f6e4dfb881111cf715f3732a2250 Oct 2 19:39:14.264057 systemd[1]: Finished ignition-disks.service. Oct 2 19:39:14.166576 ignition[703]: fetch: fetch complete Oct 2 19:39:14.283966 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:39:14.166585 ignition[703]: fetch: fetch passed Oct 2 19:39:14.298674 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:39:14.166635 ignition[703]: Ignition finished successfully Oct 2 19:39:14.311683 systemd[1]: Reached target local-fs.target. Oct 2 19:39:14.205375 ignition[709]: Ignition 2.14.0 Oct 2 19:39:14.325681 systemd[1]: Reached target sysinit.target. Oct 2 19:39:14.205384 ignition[709]: Stage: kargs Oct 2 19:39:14.336673 systemd[1]: Reached target basic.target. Oct 2 19:39:14.205552 ignition[709]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:39:14.351936 systemd[1]: Starting systemd-fsck-root.service... 
Oct 2 19:39:14.205579 ignition[709]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Oct 2 19:39:14.212062 ignition[709]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 2 19:39:14.213386 ignition[709]: kargs: kargs passed Oct 2 19:39:14.213436 ignition[709]: Ignition finished successfully Oct 2 19:39:14.251459 ignition[715]: Ignition 2.14.0 Oct 2 19:39:14.251471 ignition[715]: Stage: disks Oct 2 19:39:14.251652 ignition[715]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:39:14.251684 ignition[715]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Oct 2 19:39:14.259388 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 2 19:39:14.260717 ignition[715]: disks: disks passed Oct 2 19:39:14.260766 ignition[715]: Ignition finished successfully Oct 2 19:39:14.390892 systemd-fsck[723]: ROOT: clean, 603/1628000 files, 124049/1617920 blocks Oct 2 19:39:14.587747 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:39:14.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:14.589289 systemd[1]: Mounting sysroot.mount... Oct 2 19:39:14.624779 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:39:14.618911 systemd[1]: Mounted sysroot.mount. Oct 2 19:39:14.631895 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:39:14.651111 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:39:14.669159 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 19:39:14.669245 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:39:14.669295 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:39:14.690420 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:39:14.717494 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:39:14.774686 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (729) Oct 2 19:39:14.774731 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:39:14.774757 kernel: BTRFS info (device sda6): using free space tree Oct 2 19:39:14.774780 kernel: BTRFS info (device sda6): has skinny extents Oct 2 19:39:14.749846 systemd[1]: Starting initrd-setup-root.service... Oct 2 19:39:14.790699 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 2 19:39:14.789082 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:39:14.807674 initrd-setup-root[734]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:39:14.818635 initrd-setup-root[758]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:39:14.828606 initrd-setup-root[766]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:39:14.838664 initrd-setup-root[776]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:39:14.861159 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:39:14.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:14.862633 systemd[1]: Starting ignition-mount.service... Oct 2 19:39:14.889815 systemd[1]: Starting sysroot-boot.service... Oct 2 19:39:14.897846 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Oct 2 19:39:14.898007 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Oct 2 19:39:14.925635 ignition[794]: INFO : Ignition 2.14.0 Oct 2 19:39:14.925635 ignition[794]: INFO : Stage: mount Oct 2 19:39:14.925635 ignition[794]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:39:14.925635 ignition[794]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Oct 2 19:39:14.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:14.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:14.933372 systemd[1]: Finished sysroot-boot.service. Oct 2 19:39:14.996687 ignition[794]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 2 19:39:14.996687 ignition[794]: INFO : mount: mount passed Oct 2 19:39:14.996687 ignition[794]: INFO : Ignition finished successfully Oct 2 19:39:15.062794 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (804) Oct 2 19:39:15.062832 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:39:15.062847 kernel: BTRFS info (device sda6): using free space tree Oct 2 19:39:15.062862 kernel: BTRFS info (device sda6): has skinny extents Oct 2 19:39:15.062877 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 2 19:39:14.940135 systemd[1]: Finished ignition-mount.service. Oct 2 19:39:14.958364 systemd[1]: Starting ignition-files.service... Oct 2 19:39:14.993983 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
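The systemd-fsck[723] summary above reports the root filesystem as clean with 603 of 1,628,000 inodes and 124,049 of 1,617,920 blocks in use, i.e. roughly 0.04% of inodes and 7.7% of blocks. A throwaway calculation with the values copied from the log, just to make the numbers concrete:

```python
# Utilization implied by "ROOT: clean, 603/1628000 files, 124049/1617920 blocks".
inodes_used, inodes_total = 603, 1_628_000
blocks_used, blocks_total = 124_049, 1_617_920

print(f"inode utilization: {inodes_used / inodes_total:.4%}")   # ~0.0370%
print(f"block utilization: {blocks_used / blocks_total:.2%}")   # ~7.67%
```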
Oct 2 19:39:15.096759 ignition[823]: INFO : Ignition 2.14.0 Oct 2 19:39:15.096759 ignition[823]: INFO : Stage: files Oct 2 19:39:15.096759 ignition[823]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:39:15.096759 ignition[823]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Oct 2 19:39:15.152696 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (825) Oct 2 19:39:14.997699 systemd-networkd[679]: eth0: Gained IPv6LL Oct 2 19:39:15.160700 ignition[823]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 2 19:39:15.160700 ignition[823]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:39:15.160700 ignition[823]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:39:15.160700 ignition[823]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:39:15.160700 ignition[823]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:39:15.160700 ignition[823]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:39:15.160700 ignition[823]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:39:15.160700 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/hosts" Oct 2 19:39:15.160700 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:39:15.160700 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2370245578" Oct 2 19:39:15.160700 ignition[823]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2370245578": device or resource busy Oct 2 19:39:15.160700 ignition[823]: ERROR : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2370245578", trying btrfs: device or resource busy Oct 2 19:39:15.160700 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2370245578" Oct 2 19:39:15.160700 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2370245578" Oct 2 19:39:15.160700 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [started] unmounting "/mnt/oem2370245578" Oct 2 19:39:15.160700 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem2370245578" Oct 2 19:39:15.160700 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts" Oct 2 19:39:15.160700 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Oct 2 19:39:15.056938 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
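The files-stage entries above show a pattern that recurs for every OEM-backed file: Ignition first tries to mount /dev/disk/by-label/OEM as ext4, gets "device or resource busy", logs the failure, and then succeeds with btrfs (the kernel BTRFS lines earlier confirm the partition's real filesystem). As an illustration only, not Ignition's actual implementation, a try-each-filesystem fallback can be sketched like this; it must run as root, and the device name is taken straight from the log.

```python
import subprocess
import tempfile

def mount_with_fallback(device: str, fstypes=("ext4", "btrfs")) -> str:
    """Try mounting `device` with each filesystem type in turn, mirroring the
    op(4)/op(5) pairs in the Ignition log above. Returns the mountpoint."""
    mountpoint = tempfile.mkdtemp(prefix="oem")
    last_err = None
    for fstype in fstypes:
        result = subprocess.run(
            ["mount", "-t", fstype, device, mountpoint],
            capture_output=True, text=True)
        if result.returncode == 0:
            return mountpoint          # e.g. succeeds on the btrfs attempt
        last_err = result.stderr.strip()
    raise RuntimeError(f"could not mount {device}: {last_err}")

# Usage (requires root):
#   mnt = mount_with_fallback("/dev/disk/by-label/OEM")
#   ... copy files, then: subprocess.run(["umount", mnt], check=True)
```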
Oct 2 19:39:15.433679 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Oct 2 19:39:15.433679 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Oct 2 19:39:15.124409 unknown[823]: wrote ssh authorized keys file for user: core Oct 2 19:39:15.681037 ignition[823]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Oct 2 19:39:15.681037 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Oct 2 19:39:15.721666 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Oct 2 19:39:15.721666 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Oct 2 19:39:15.822769 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Oct 2 19:39:15.940726 ignition[823]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Oct 2 19:39:15.964649 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Oct 2 19:39:15.964649 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Oct 2 19:39:15.964649 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(9): oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:39:15.964649 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1278900295" Oct 2 19:39:15.964649 ignition[823]: CRITICAL : files: createFilesystemsFiles: createFiles: op(9): op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1278900295": device or resource busy Oct 2 19:39:15.964649 ignition[823]: ERROR : files: createFilesystemsFiles: createFiles: op(9): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1278900295", trying btrfs: device or resource busy Oct 2 19:39:15.964649 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1278900295" Oct 2 19:39:15.964649 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1278900295" Oct 2 19:39:15.964649 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [started] unmounting "/mnt/oem1278900295" Oct 2 19:39:15.964649 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [finished] unmounting "/mnt/oem1278900295" Oct 2 19:39:15.964649 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Oct 2 19:39:15.964649 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] 
writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:39:15.964649 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(d): GET https://storage.googleapis.com/kubernetes-release/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1 Oct 2 19:39:16.185639 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(d): GET result: OK Oct 2 19:39:16.331866 ignition[823]: DEBUG : files: createFilesystemsFiles: createFiles: op(d): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1 Oct 2 19:39:16.355639 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:39:16.355639 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:39:16.355639 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET https://storage.googleapis.com/kubernetes-release/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1 Oct 2 19:39:16.404677 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK Oct 2 19:39:16.982214 ignition[823]: DEBUG : files: createFilesystemsFiles: createFiles: op(e): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75 Oct 2 19:39:17.006665 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:39:17.006665 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:39:17.006665 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:39:17.006665 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:39:17.006665 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:39:17.006665 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" Oct 2 19:39:17.006665 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(11): oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:39:17.006665 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2272390383" Oct 2 19:39:17.006665 ignition[823]: CRITICAL : files: createFilesystemsFiles: createFiles: op(11): op(12): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2272390383": device or resource busy Oct 2 19:39:17.006665 ignition[823]: ERROR : files: createFilesystemsFiles: createFiles: op(11): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2272390383", trying btrfs: device or resource busy Oct 2 19:39:17.006665 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2272390383" Oct 2 19:39:17.006665 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2272390383" Oct 2 19:39:17.006665 ignition[823]: INFO : files: createFilesystemsFiles: 
createFiles: op(11): op(14): [started] unmounting "/mnt/oem2272390383" Oct 2 19:39:17.006665 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(14): [finished] unmounting "/mnt/oem2272390383" Oct 2 19:39:17.006665 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service" Oct 2 19:39:17.443657 kernel: kauditd_printk_skb: 26 callbacks suppressed Oct 2 19:39:17.443711 kernel: audit: type=1130 audit(1696275557.050:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.443738 kernel: audit: type=1130 audit(1696275557.180:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.443771 kernel: audit: type=1130 audit(1696275557.222:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.443794 kernel: audit: type=1131 audit(1696275557.222:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.443809 kernel: audit: type=1130 audit(1696275557.337:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.443823 kernel: audit: type=1131 audit(1696275557.337:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.000896 systemd[1]: mnt-oem2272390383.mount: Deactivated successfully. 
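Throughout the files stage Ignition downloads artifacts (the CNI plugins tarball, crictl, kubeadm, kubelet) and logs "file matches expected sum of: <sha512>". The check itself is just a streaming SHA-512 comparison; a minimal sketch follows, with the kubeadm URL and digest copied from the op(d) entries above purely as example values (the function name is mine, not Ignition's).

```python
import hashlib
import urllib.request

def sha512_matches(url: str, expected_hex: str, chunk_size: int = 1 << 20) -> bool:
    """Download `url` and compare its SHA-512 digest against `expected_hex`,
    the way the "file matches expected sum of: ..." entries above imply."""
    digest = hashlib.sha512()
    with urllib.request.urlopen(url) as resp:
        while chunk := resp.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex.lower()

# Values taken from the op(d) entries for /sysroot/opt/bin/kubeadm above:
URL = "https://storage.googleapis.com/kubernetes-release/release/v1.28.1/bin/linux/amd64/kubeadm"
SUM = ("f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683"
       "bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1")
# print(sha512_matches(URL, SUM))
```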
Oct 2 19:39:17.458864 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(15): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Oct 2 19:39:17.458864 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(15): oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:39:17.458864 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem195967573" Oct 2 19:39:17.458864 ignition[823]: CRITICAL : files: createFilesystemsFiles: createFiles: op(15): op(16): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem195967573": device or resource busy Oct 2 19:39:17.458864 ignition[823]: ERROR : files: createFilesystemsFiles: createFiles: op(15): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem195967573", trying btrfs: device or resource busy Oct 2 19:39:17.458864 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(17): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem195967573" Oct 2 19:39:17.458864 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(17): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem195967573" Oct 2 19:39:17.458864 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(18): [started] unmounting "/mnt/oem195967573" Oct 2 19:39:17.458864 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(18): [finished] unmounting "/mnt/oem195967573" Oct 2 19:39:17.458864 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(15): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Oct 2 19:39:17.458864 ignition[823]: INFO : files: op(19): [started] processing unit "oem-gce-enable-oslogin.service" Oct 2 19:39:17.458864 ignition[823]: INFO : files: op(19): [finished] processing unit "oem-gce-enable-oslogin.service" Oct 2 19:39:17.458864 ignition[823]: INFO : files: op(1a): [started] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:39:17.458864 ignition[823]: INFO : files: op(1a): [finished] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:39:17.458864 ignition[823]: INFO : files: op(1b): [started] processing unit "oem-gce.service" Oct 2 19:39:17.458864 ignition[823]: INFO : files: op(1b): [finished] processing unit "oem-gce.service" Oct 2 19:39:17.458864 ignition[823]: INFO : files: op(1c): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:39:17.855788 kernel: audit: type=1130 audit(1696275557.465:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.855845 kernel: audit: type=1131 audit(1696275557.653:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.023002 systemd[1]: mnt-oem195967573.mount: Deactivated successfully. 
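After writing files, Ignition "processes" the units it was asked to manage (oem-gce-enable-oslogin.service, coreos-metadata-sshkeys@.service, oem-gce.service, prepare-cni-plugins.service above), with the preset/enable operations following in the next entries. On disk, enabling a unit for a target ultimately comes down to a symlink under that target's .wants directory inside the sysroot. A rough illustration under the assumption that the unit's [Install] section names multi-user.target; Ignition's real implementation and systemd's preset machinery are more involved, and the function name is hypothetical.

```python
from pathlib import Path

def enable_unit(sysroot: str, unit: str, target: str = "multi-user.target") -> Path:
    """Create the <target>.wants/<unit> symlink that "enabled" amounts to.
    Assumes the unit file was already written under /etc/systemd/system in the sysroot."""
    wants_dir = Path(sysroot, "etc/systemd/system", f"{target}.wants")
    wants_dir.mkdir(parents=True, exist_ok=True)
    link = wants_dir / unit
    if not link.is_symlink():
        # Point at the canonical unit path as it will appear after switch-root.
        link.symlink_to(f"/etc/systemd/system/{unit}")
    return link

# e.g. enable_unit("/sysroot", "prepare-cni-plugins.service")
```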
Oct 2 19:39:17.871955 ignition[823]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:39:17.871955 ignition[823]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:39:17.871955 ignition[823]: INFO : files: op(1c): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:39:17.871955 ignition[823]: INFO : files: op(1e): [started] processing unit "prepare-critools.service" Oct 2 19:39:17.871955 ignition[823]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:39:17.871955 ignition[823]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:39:17.871955 ignition[823]: INFO : files: op(1e): [finished] processing unit "prepare-critools.service" Oct 2 19:39:17.871955 ignition[823]: INFO : files: op(20): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:39:17.871955 ignition[823]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:39:17.871955 ignition[823]: INFO : files: op(21): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" Oct 2 19:39:17.871955 ignition[823]: INFO : files: op(21): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" Oct 2 19:39:17.871955 ignition[823]: INFO : files: op(22): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:39:17.871955 ignition[823]: INFO : files: op(22): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:39:17.871955 ignition[823]: INFO : files: op(23): [started] setting preset to enabled for "oem-gce.service" Oct 2 19:39:17.871955 ignition[823]: INFO : files: op(23): [finished] setting preset to enabled for "oem-gce.service" Oct 2 19:39:17.871955 ignition[823]: INFO : files: op(24): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:39:17.871955 ignition[823]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:39:17.871955 ignition[823]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:39:17.871955 ignition[823]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:39:17.871955 ignition[823]: INFO : files: files passed Oct 2 19:39:18.279841 kernel: audit: type=1131 audit(1696275557.929:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:18.279886 kernel: audit: type=1131 audit(1696275558.004:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:18.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:18.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:18.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:18.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:18.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:18.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:18.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.044012 systemd[1]: Finished ignition-files.service. Oct 2 19:39:18.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:18.294945 initrd-setup-root-after-ignition[846]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:39:18.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:18.317031 ignition[823]: INFO : Ignition finished successfully Oct 2 19:39:18.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.062295 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:39:18.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.097853 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:39:18.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.098980 systemd[1]: Starting ignition-quench.service... Oct 2 19:39:17.141079 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:39:18.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:17.182223 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:39:17.182399 systemd[1]: Finished ignition-quench.service. Oct 2 19:39:17.224121 systemd[1]: Reached target ignition-complete.target. Oct 2 19:39:18.451778 ignition[861]: INFO : Ignition 2.14.0 Oct 2 19:39:18.451778 ignition[861]: INFO : Stage: umount Oct 2 19:39:18.451778 ignition[861]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:39:18.451778 ignition[861]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Oct 2 19:39:18.451778 ignition[861]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 2 19:39:18.451778 ignition[861]: INFO : umount: umount passed Oct 2 19:39:18.451778 ignition[861]: INFO : Ignition finished successfully Oct 2 19:39:17.291919 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:39:18.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.335517 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:39:18.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.335650 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:39:17.339080 systemd[1]: Reached target initrd-fs.target. Oct 2 19:39:17.420736 systemd[1]: Reached target initrd.target. Oct 2 19:39:17.420922 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:39:18.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.422134 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:39:18.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.450998 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:39:18.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:18.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:18.647000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:39:17.468218 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:39:17.514066 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:39:17.525974 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:39:18.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.572971 systemd[1]: Stopped target timers.target. Oct 2 19:39:18.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:17.617000 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:39:18.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.617196 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:39:17.655144 systemd[1]: Stopped target initrd.target. Oct 2 19:39:18.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.695804 systemd[1]: Stopped target basic.target. Oct 2 19:39:17.720864 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:39:17.741845 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:39:17.762866 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:39:18.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.782852 systemd[1]: Stopped target remote-fs.target. Oct 2 19:39:18.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.804867 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:39:18.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.823884 systemd[1]: Stopped target sysinit.target. Oct 2 19:39:17.842875 systemd[1]: Stopped target local-fs.target. Oct 2 19:39:17.863839 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:39:18.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.879855 systemd[1]: Stopped target swap.target. Oct 2 19:39:18.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.904836 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:39:18.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:18.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:17.905051 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:39:17.930997 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:39:17.964030 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:39:17.964241 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:39:18.006129 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:39:18.006365 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:39:18.984775 systemd-journald[190]: Received SIGTERM from PID 1 (n/a). 
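The serial console interleaves kernel audit records (SERVICE_START / SERVICE_STOP) with the regular journal output, which makes the initrd teardown sequence above hard to follow. A disposable parser for the record layout shown in this log (the field names and regex are taken from these lines, not from any audit library) can pull out just the event type, unit, and result:

```python
import re

# Matches records like:
#   Oct 2 19:39:18.154000 audit[1]: SERVICE_STOP pid=1 ... msg='unit=kmod-static-nodes ... res=success'
AUDIT_RE = re.compile(
    r"audit\[\d+\]: (?P<type>SERVICE_START|SERVICE_STOP) .*?"
    r"unit=(?P<unit>[\w@\\.-]+) .*?res=(?P<res>\w+)")

def service_events(console_text: str):
    """Yield (event_type, unit, result) tuples from a captured console log."""
    for match in AUDIT_RE.finditer(console_text):
        yield match.group("type"), match.group("unit"), match.group("res")

# for ev, unit, res in service_events(open("console.log").read()):
#     print(f"{ev:<13} {unit:<40} {res}")
```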
Oct 2 19:39:18.984846 iscsid[687]: iscsid shutting down. Oct 2 19:39:18.053042 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:39:18.053224 systemd[1]: Stopped ignition-files.service. Oct 2 19:39:18.065452 systemd[1]: Stopping ignition-mount.service... Oct 2 19:39:18.113144 systemd[1]: Stopping iscsiuio.service... Oct 2 19:39:18.133656 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:39:18.133936 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:39:18.157262 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:39:18.199857 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:39:18.200162 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:39:18.213102 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:39:18.213292 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:39:18.256954 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:39:18.257799 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:39:18.257914 systemd[1]: Stopped iscsiuio.service. Oct 2 19:39:18.268443 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:39:18.268577 systemd[1]: Stopped ignition-mount.service. Oct 2 19:39:18.288345 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:39:18.288463 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:39:18.302321 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:39:18.302514 systemd[1]: Stopped ignition-disks.service. Oct 2 19:39:18.324773 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:39:18.324858 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:39:18.340806 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 2 19:39:18.340880 systemd[1]: Stopped ignition-fetch.service. Oct 2 19:39:18.358792 systemd[1]: Stopped target network.target. Oct 2 19:39:18.383706 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:39:18.383819 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:39:18.398794 systemd[1]: Stopped target paths.target. Oct 2 19:39:18.412698 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:39:18.417598 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:39:18.429787 systemd[1]: Stopped target slices.target. Oct 2 19:39:18.443737 systemd[1]: Stopped target sockets.target. Oct 2 19:39:18.459810 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:39:18.459858 systemd[1]: Closed iscsid.socket. Oct 2 19:39:18.487831 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:39:18.487901 systemd[1]: Closed iscsiuio.socket. Oct 2 19:39:18.522811 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:39:18.522888 systemd[1]: Stopped ignition-setup.service. Oct 2 19:39:18.543848 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:39:18.543921 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:39:18.560046 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:39:18.564596 systemd-networkd[679]: eth0: DHCPv6 lease lost Oct 2 19:39:18.575937 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:39:18.591698 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:39:18.591825 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:39:18.612751 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:39:18.612885 systemd[1]: Stopped systemd-networkd.service. 
Oct 2 19:39:18.631459 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:39:18.631598 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:39:18.648962 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:39:18.649006 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:39:18.663734 systemd[1]: Stopping network-cleanup.service... Oct 2 19:39:18.676681 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:39:18.676816 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:39:18.691830 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:39:18.691922 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:39:18.707959 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:39:18.708026 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:39:18.723003 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:39:18.739134 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:39:18.739817 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:39:18.739967 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:39:18.746412 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:39:18.746511 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:39:18.771757 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:39:18.771828 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:39:18.788685 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:39:18.788781 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:39:18.804888 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:39:18.804977 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:39:18.820776 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:39:18.820854 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:39:18.836780 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:39:18.859650 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:39:18.859871 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:39:18.875350 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:39:18.875475 systemd[1]: Stopped network-cleanup.service. Oct 2 19:39:18.890151 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:39:18.890266 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:39:18.905045 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:39:18.922820 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:39:18.943924 systemd[1]: Switching root. Oct 2 19:39:18.988810 systemd-journald[190]: Journal stopped Oct 2 19:39:23.786645 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:39:23.786779 kernel: SELinux: Class anon_inode not defined in policy. 
Oct 2 19:39:23.786805 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:39:23.786828 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:39:23.786849 kernel: SELinux: policy capability open_perms=1 Oct 2 19:39:23.786871 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:39:23.786899 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:39:23.786993 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:39:23.787700 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:39:23.787731 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:39:23.787754 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:39:23.787779 systemd[1]: Successfully loaded SELinux policy in 113.945ms. Oct 2 19:39:23.787824 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.208ms. Oct 2 19:39:23.787850 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:39:23.787875 systemd[1]: Detected virtualization kvm. Oct 2 19:39:23.787898 systemd[1]: Detected architecture x86-64. Oct 2 19:39:23.787922 systemd[1]: Detected first boot. Oct 2 19:39:23.787950 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:39:23.787973 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:39:23.787999 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:39:23.788030 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:39:23.788056 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:39:23.788086 kernel: kauditd_printk_skb: 38 callbacks suppressed Oct 2 19:39:23.788111 kernel: audit: type=1334 audit(1696275562.847:85): prog-id=12 op=LOAD Oct 2 19:39:23.788133 kernel: audit: type=1334 audit(1696275562.847:86): prog-id=3 op=UNLOAD Oct 2 19:39:23.788156 kernel: audit: type=1334 audit(1696275562.859:87): prog-id=13 op=LOAD Oct 2 19:39:23.788185 kernel: audit: type=1334 audit(1696275562.873:88): prog-id=14 op=LOAD Oct 2 19:39:23.795680 kernel: audit: type=1334 audit(1696275562.873:89): prog-id=4 op=UNLOAD Oct 2 19:39:23.795715 kernel: audit: type=1334 audit(1696275562.873:90): prog-id=5 op=UNLOAD Oct 2 19:39:23.795734 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:39:23.795755 kernel: audit: type=1131 audit(1696275562.876:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.795779 systemd[1]: Stopped iscsid.service. Oct 2 19:39:23.795795 kernel: audit: type=1334 audit(1696275562.928:92): prog-id=12 op=UNLOAD Oct 2 19:39:23.795809 kernel: audit: type=1131 audit(1696275562.942:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:39:23.795824 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:39:23.795840 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:39:23.795855 kernel: audit: type=1130 audit(1696275562.985:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.795870 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:39:23.795893 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:39:23.795914 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:39:23.795936 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Oct 2 19:39:23.795952 systemd[1]: Created slice system-getty.slice. Oct 2 19:39:23.795967 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:39:23.795981 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:39:23.795996 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:39:23.796011 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:39:23.796026 systemd[1]: Created slice user.slice. Oct 2 19:39:23.796047 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:39:23.796066 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:39:23.796081 systemd[1]: Set up automount boot.automount. Oct 2 19:39:23.796095 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:39:23.796128 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:39:23.796152 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:39:23.796176 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:39:23.796199 systemd[1]: Reached target integritysetup.target. Oct 2 19:39:23.796221 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:39:23.796249 systemd[1]: Reached target remote-fs.target. Oct 2 19:39:23.796273 systemd[1]: Reached target slices.target. Oct 2 19:39:23.796289 systemd[1]: Reached target swap.target. Oct 2 19:39:23.796303 systemd[1]: Reached target torcx.target. Oct 2 19:39:23.796321 systemd[1]: Reached target veritysetup.target. Oct 2 19:39:23.796336 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:39:23.796351 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:39:23.796366 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:39:23.796382 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:39:23.796396 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:39:23.796414 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:39:23.796429 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:39:23.796444 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:39:23.796461 systemd[1]: Mounting media.mount... Oct 2 19:39:23.796477 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:39:23.796620 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:39:23.796640 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:39:23.796655 systemd[1]: Mounting tmp.mount... Oct 2 19:39:23.796677 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:39:23.796697 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:39:23.796727 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:39:23.796751 systemd[1]: Starting modprobe@configfs.service... 
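The kernel lines above list the policy capabilities compiled into the freshly loaded SELinux policy (network_peer_controls=1, open_perms=1, cgroup_seclabel=1, and so on). On a running system the same flags are exposed through selinuxfs; a small sketch that reads them back, assuming selinuxfs is mounted at the usual /sys/fs/selinux (the helper name is mine):

```python
from pathlib import Path

CAPS_DIR = Path("/sys/fs/selinux/policy_capabilities")

def policy_capabilities() -> dict[str, bool]:
    """Read SELinux policy capability flags (e.g. network_peer_controls) from selinuxfs."""
    if not CAPS_DIR.is_dir():
        raise RuntimeError("selinuxfs not mounted or SELinux disabled")
    return {p.name: p.read_text().strip() == "1" for p in sorted(CAPS_DIR.iterdir())}

# Expected to mirror the "SELinux: policy capability ...=N" boot lines above, e.g.
#   {'always_check_network': False, 'cgroup_seclabel': True, 'open_perms': True, ...}
```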
Oct 2 19:39:23.796775 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:39:23.796798 systemd[1]: Starting modprobe@drm.service... Oct 2 19:39:23.796823 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:39:23.796847 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:39:23.796871 systemd[1]: Starting modprobe@loop.service... Oct 2 19:39:23.796892 kernel: fuse: init (API version 7.34) Oct 2 19:39:23.796920 kernel: loop: module loaded Oct 2 19:39:23.796943 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:39:23.796965 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:39:23.796986 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:39:23.797009 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:39:23.797030 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:39:23.797051 systemd[1]: Stopped systemd-journald.service. Oct 2 19:39:23.797074 systemd[1]: Starting systemd-journald.service... Oct 2 19:39:23.797095 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:39:23.797123 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:39:23.797153 systemd-journald[985]: Journal started Oct 2 19:39:23.797248 systemd-journald[985]: Runtime Journal (/run/log/journal/35b37c116bf4398c2ff6bc577b6f2bb1) is 8.0M, max 148.8M, 140.8M free. Oct 2 19:39:18.988000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:39:19.317000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:39:19.469000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:39:19.469000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:39:19.469000 audit: BPF prog-id=10 op=LOAD Oct 2 19:39:19.469000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:39:19.469000 audit: BPF prog-id=11 op=LOAD Oct 2 19:39:19.469000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:39:22.847000 audit: BPF prog-id=12 op=LOAD Oct 2 19:39:22.847000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:39:22.859000 audit: BPF prog-id=13 op=LOAD Oct 2 19:39:22.873000 audit: BPF prog-id=14 op=LOAD Oct 2 19:39:22.873000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:39:22.873000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:39:22.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:22.928000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:39:22.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:22.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:22.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:23.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.749000 audit: BPF prog-id=15 op=LOAD Oct 2 19:39:23.749000 audit: BPF prog-id=16 op=LOAD Oct 2 19:39:23.749000 audit: BPF prog-id=17 op=LOAD Oct 2 19:39:23.749000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:39:23.749000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:39:23.779000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:39:23.779000 audit[985]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdca0d7b00 a2=4000 a3=7ffdca0d7b9c items=0 ppid=1 pid=985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:23.779000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:39:22.845971 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:39:19.650804 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:39:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:39:22.877095 systemd[1]: systemd-journald.service: Deactivated successfully. 
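The torcx-generator[894] entry above logs the ordered list of store paths it will search (/usr/share/torcx/store, the OEM stores with and without the 3510.3.0 version suffix, and the /var/lib stores); the entries that follow show which stores are skipped as missing and which archives, such as docker:20.10.torcx.tgz, get added to its cache. As an illustration of that lookup only (torcx itself is a Go binary and its real resolution logic is more involved), one could enumerate the same stores like this, inferring the name:reference split from the filenames the generator logs:

```python
from pathlib import Path

# Store paths as logged by torcx-generator[894] above.
STORE_PATHS = [
    "/usr/share/torcx/store",
    "/usr/share/oem/torcx/store/3510.3.0",
    "/usr/share/oem/torcx/store",
    "/var/lib/torcx/store/3510.3.0",
    "/var/lib/torcx/store",
]

def scan_stores(paths=STORE_PATHS):
    """Yield (name, reference, path) for archives named <name>:<reference>.torcx.tgz,
    matching the "new archive/reference added to cache" entries in the log."""
    for store in paths:
        store_dir = Path(store)
        if not store_dir.is_dir():
            continue                     # torcx logs these as "store skipped"
        for archive in sorted(store_dir.glob("*.torcx.tgz")):
            name, _, reference = archive.name[:-len(".torcx.tgz")].partition(":")
            yield name, reference, archive

# for name, ref, path in scan_stores():
#     print(f"{name}:{ref} -> {path}")
```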
Oct 2 19:39:19.651904 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:39:19Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:39:19.651929 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:39:19Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:39:19.651969 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:39:19Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:39:19.651981 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:39:19Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:39:19.652023 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:39:19Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:39:19.652046 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:39:19Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:39:19.652298 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:39:19Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:39:19.652344 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:39:19Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:39:19.652360 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:39:19Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:39:19.653300 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:39:19Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:39:19.653343 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:39:19Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:39:19.653368 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:39:19Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:39:19.653385 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:39:19Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:39:19.653407 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:39:19Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:39:19.653424 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:39:19Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:39:22.243001 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:39:22Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:39:22.243325 /usr/lib/systemd/system-generators/torcx-generator[894]: 
time="2023-10-02T19:39:22Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:39:22.243476 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:39:22Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:39:22.243729 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:39:22Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:39:22.243791 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:39:22Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:39:22.243871 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:39:22Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:39:23.809532 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:39:23.823581 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:39:23.843996 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:39:23.844117 systemd[1]: Stopped verity-setup.service. Oct 2 19:39:23.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.864752 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:39:23.873520 systemd[1]: Started systemd-journald.service. Oct 2 19:39:23.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.882892 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:39:23.889864 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:39:23.896842 systemd[1]: Mounted media.mount. Oct 2 19:39:23.903829 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:39:23.912837 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:39:23.922849 systemd[1]: Mounted tmp.mount. Oct 2 19:39:23.931000 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:39:23.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.940108 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:39:23.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 2 19:39:23.949166 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:39:23.949425 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:39:23.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.959214 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:39:23.959440 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:39:23.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.969087 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:39:23.969305 systemd[1]: Finished modprobe@drm.service. Oct 2 19:39:23.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.978139 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:39:23.978362 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:39:23.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.987073 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:39:23.987288 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:39:23.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.996072 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:39:23.996303 systemd[1]: Finished modprobe@loop.service. Oct 2 19:39:24.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:24.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:24.005062 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:39:24.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:24.014232 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:39:24.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:24.023043 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:39:24.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:24.032075 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:39:24.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:24.041330 systemd[1]: Reached target network-pre.target. Oct 2 19:39:24.052176 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:39:24.062117 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:39:24.069630 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:39:24.072262 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:39:24.081223 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:39:24.089920 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:39:24.091645 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:39:24.098691 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:39:24.100473 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:39:24.105035 systemd-journald[985]: Time spent on flushing to /var/log/journal/35b37c116bf4398c2ff6bc577b6f2bb1 is 56.164ms for 1142 entries. Oct 2 19:39:24.105035 systemd-journald[985]: System Journal (/var/log/journal/35b37c116bf4398c2ff6bc577b6f2bb1) is 8.0M, max 584.8M, 576.8M free. Oct 2 19:39:24.198185 systemd-journald[985]: Received client request to flush runtime journal. Oct 2 19:39:24.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:24.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:24.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:24.115803 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:39:24.125430 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:39:24.199850 udevadm[999]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 2 19:39:24.135990 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:39:24.144781 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:39:24.154009 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:39:24.166260 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:39:24.175257 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:39:24.184339 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:39:24.199442 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:39:24.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:24.781177 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:39:24.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:24.788000 audit: BPF prog-id=18 op=LOAD Oct 2 19:39:24.789000 audit: BPF prog-id=19 op=LOAD Oct 2 19:39:24.789000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:39:24.789000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:39:24.791536 systemd[1]: Starting systemd-udevd.service... Oct 2 19:39:24.814342 systemd-udevd[1002]: Using default interface naming scheme 'v252'. Oct 2 19:39:24.864481 systemd[1]: Started systemd-udevd.service. Oct 2 19:39:24.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:24.874000 audit: BPF prog-id=20 op=LOAD Oct 2 19:39:24.877129 systemd[1]: Starting systemd-networkd.service... Oct 2 19:39:24.891000 audit: BPF prog-id=21 op=LOAD Oct 2 19:39:24.891000 audit: BPF prog-id=22 op=LOAD Oct 2 19:39:24.891000 audit: BPF prog-id=23 op=LOAD Oct 2 19:39:24.894435 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:39:24.953512 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 19:39:24.956443 systemd[1]: Started systemd-userdbd.service. Oct 2 19:39:24.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:25.074534 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 2 19:39:25.095626 systemd-networkd[1016]: lo: Link UP Oct 2 19:39:25.095798 systemd-networkd[1016]: lo: Gained carrier Oct 2 19:39:25.096678 systemd-networkd[1016]: Enumeration completed Oct 2 19:39:25.096841 systemd[1]: Started systemd-networkd.service. Oct 2 19:39:25.097352 systemd-networkd[1016]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 2 19:39:25.099615 systemd-networkd[1016]: eth0: Link UP Oct 2 19:39:25.099920 systemd-networkd[1016]: eth0: Gained carrier Oct 2 19:39:25.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:25.113738 systemd-networkd[1016]: eth0: DHCPv4 address 10.128.0.92/32, gateway 10.128.0.1 acquired from 169.254.169.254 Oct 2 19:39:25.144522 kernel: EDAC MC: Ver: 3.0.0 Oct 2 19:39:25.144633 kernel: ACPI: button: Power Button [PWRF] Oct 2 19:39:25.137000 audit[1015]: AVC avc: denied { confidentiality } for pid=1015 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 19:39:25.187528 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Oct 2 19:39:25.187631 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1013) Oct 2 19:39:25.229592 kernel: ACPI: button: Sleep Button [SLPF] Oct 2 19:39:25.137000 audit[1015]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=562b825f64c0 a1=32194 a2=7fb4389febc5 a3=5 items=106 ppid=1002 pid=1015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:25.137000 audit: CWD cwd="/" Oct 2 19:39:25.137000 audit: PATH item=0 name=(null) inode=14573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=1 name=(null) inode=14574 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=2 name=(null) inode=14573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=3 name=(null) inode=14575 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=4 name=(null) inode=14573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=5 name=(null) inode=14576 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=6 name=(null) inode=14576 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=7 name=(null) inode=14577 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=8 name=(null) inode=14576 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=9 name=(null) inode=14578 
dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=10 name=(null) inode=14576 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=11 name=(null) inode=14579 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=12 name=(null) inode=14576 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=13 name=(null) inode=14580 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=14 name=(null) inode=14576 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=15 name=(null) inode=14581 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=16 name=(null) inode=14573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=17 name=(null) inode=14582 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=18 name=(null) inode=14582 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=19 name=(null) inode=14583 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=20 name=(null) inode=14582 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=21 name=(null) inode=14584 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=22 name=(null) inode=14582 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=23 name=(null) inode=14585 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=24 name=(null) inode=14582 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=25 name=(null) inode=14586 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=26 name=(null) inode=14582 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=27 name=(null) inode=14587 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=28 name=(null) inode=14573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=29 name=(null) inode=14588 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=30 name=(null) inode=14588 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=31 name=(null) inode=14589 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=32 name=(null) inode=14588 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=33 name=(null) inode=14590 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=34 name=(null) inode=14588 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=35 name=(null) inode=14591 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=36 name=(null) inode=14588 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=37 name=(null) inode=14592 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=38 name=(null) inode=14588 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=39 name=(null) inode=14593 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=40 name=(null) inode=14573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=41 name=(null) inode=14594 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 
audit: PATH item=42 name=(null) inode=14594 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=43 name=(null) inode=14595 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=44 name=(null) inode=14594 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=45 name=(null) inode=14596 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=46 name=(null) inode=14594 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=47 name=(null) inode=14597 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=48 name=(null) inode=14594 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=49 name=(null) inode=14598 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=50 name=(null) inode=14594 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=51 name=(null) inode=14599 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=52 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=53 name=(null) inode=14600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=54 name=(null) inode=14600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=55 name=(null) inode=14601 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=56 name=(null) inode=14600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=57 name=(null) inode=14602 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=58 name=(null) inode=14600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=59 name=(null) inode=14603 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=60 name=(null) inode=14603 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=61 name=(null) inode=14604 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=62 name=(null) inode=14603 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=63 name=(null) inode=14605 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=64 name=(null) inode=14603 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=65 name=(null) inode=14606 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=66 name=(null) inode=14603 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=67 name=(null) inode=14607 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=68 name=(null) inode=14603 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=69 name=(null) inode=14608 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=70 name=(null) inode=14600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=71 name=(null) inode=14609 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=72 name=(null) inode=14609 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=73 name=(null) inode=14610 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=74 name=(null) inode=14609 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=75 name=(null) inode=14611 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=76 name=(null) inode=14609 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=77 name=(null) inode=14612 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=78 name=(null) inode=14609 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=79 name=(null) inode=14613 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=80 name=(null) inode=14609 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=81 name=(null) inode=14614 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=82 name=(null) inode=14600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=83 name=(null) inode=14615 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=84 name=(null) inode=14615 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=85 name=(null) inode=14616 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=86 name=(null) inode=14615 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=87 name=(null) inode=14617 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=88 name=(null) inode=14615 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=89 name=(null) inode=14618 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=90 name=(null) inode=14615 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=91 name=(null) inode=14619 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=92 name=(null) inode=14615 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=93 name=(null) inode=14620 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=94 name=(null) inode=14600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=95 name=(null) inode=14621 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=96 name=(null) inode=14621 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=97 name=(null) inode=14622 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=98 name=(null) inode=14621 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=99 name=(null) inode=14623 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=100 name=(null) inode=14621 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=101 name=(null) inode=14624 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=102 name=(null) inode=14621 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=103 name=(null) inode=14625 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=104 name=(null) inode=14621 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PATH item=105 name=(null) inode=14626 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:25.137000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 19:39:25.252162 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Oct 2 19:39:25.251934 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Oct 2 19:39:25.275609 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Oct 2 19:39:25.282538 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 19:39:25.299066 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:39:25.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:25.309374 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:39:25.341842 lvm[1039]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:39:25.371918 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:39:25.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:25.380862 systemd[1]: Reached target cryptsetup.target. Oct 2 19:39:25.391264 systemd[1]: Starting lvm2-activation.service... Oct 2 19:39:25.397520 lvm[1040]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:39:25.427910 systemd[1]: Finished lvm2-activation.service. Oct 2 19:39:25.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:25.436877 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:39:25.445676 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:39:25.445814 systemd[1]: Reached target local-fs.target. Oct 2 19:39:25.454677 systemd[1]: Reached target machines.target. Oct 2 19:39:25.464316 systemd[1]: Starting ldconfig.service... Oct 2 19:39:25.473531 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:39:25.473645 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:39:25.475394 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:39:25.484552 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:39:25.496396 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:39:25.496860 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:39:25.496958 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:39:25.499061 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:39:25.499895 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1042 (bootctl) Oct 2 19:39:25.502633 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:39:25.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:25.528199 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:39:25.554123 systemd-tmpfiles[1046]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
Oct 2 19:39:25.565465 systemd-tmpfiles[1046]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:39:25.580998 systemd-tmpfiles[1046]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:39:25.673713 systemd-fsck[1051]: fsck.fat 4.2 (2021-01-31) Oct 2 19:39:25.673713 systemd-fsck[1051]: /dev/sda1: 789 files, 115069/258078 clusters Oct 2 19:39:25.675016 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:39:25.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:25.686627 systemd[1]: Mounting boot.mount... Oct 2 19:39:25.741214 systemd[1]: Mounted boot.mount. Oct 2 19:39:25.839712 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:39:25.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:25.947792 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:39:25.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:25.958946 systemd[1]: Starting audit-rules.service... Oct 2 19:39:25.968569 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:39:25.979756 systemd[1]: Starting oem-gce-enable-oslogin.service... Oct 2 19:39:25.991575 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:39:26.000000 audit: BPF prog-id=24 op=LOAD Oct 2 19:39:26.002999 systemd[1]: Starting systemd-resolved.service... Oct 2 19:39:26.009000 audit: BPF prog-id=25 op=LOAD Oct 2 19:39:26.012071 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:39:26.020284 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:39:26.066000 audit[1067]: SYSTEM_BOOT pid=1067 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:39:26.075417 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:39:26.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:26.084187 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:39:26.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:26.092809 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:39:26.100297 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Oct 2 19:39:26.100566 systemd[1]: Finished oem-gce-enable-oslogin.service. 
Oct 2 19:39:26.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:26.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:26.127372 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:39:26.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:26.152000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:39:26.152000 audit[1084]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe330f7fd0 a2=420 a3=0 items=0 ppid=1054 pid=1084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:26.152000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:39:26.154335 augenrules[1084]: No rules Oct 2 19:39:26.155302 systemd[1]: Finished audit-rules.service. Oct 2 19:39:26.203920 systemd-resolved[1065]: Positive Trust Anchors: Oct 2 19:39:26.204655 systemd-resolved[1065]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:39:26.204844 systemd-resolved[1065]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:39:26.224447 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:39:26.227376 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:39:26.232642 systemd-timesyncd[1066]: Contacted time server 169.254.169.254:123 (169.254.169.254). Oct 2 19:39:26.233195 systemd-timesyncd[1066]: Initial clock synchronization to Mon 2023-10-02 19:39:26.118422 UTC. Oct 2 19:39:26.235820 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:39:26.243723 systemd[1]: Reached target time-set.target. Oct 2 19:39:26.256224 systemd-resolved[1065]: Defaulting to hostname 'linux'. Oct 2 19:39:26.258886 systemd[1]: Started systemd-resolved.service. Oct 2 19:39:26.261642 systemd-networkd[1016]: eth0: Gained IPv6LL Oct 2 19:39:26.267728 systemd[1]: Reached target network.target. Oct 2 19:39:26.276664 systemd[1]: Reached target nss-lookup.target. Oct 2 19:39:26.379605 ldconfig[1041]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:39:26.385313 systemd[1]: Finished ldconfig.service. Oct 2 19:39:26.394372 systemd[1]: Starting systemd-update-done.service... Oct 2 19:39:26.403745 systemd[1]: Finished systemd-update-done.service. Oct 2 19:39:26.412806 systemd[1]: Reached target sysinit.target. 
Oct 2 19:39:26.422769 systemd[1]: Started motdgen.path. Oct 2 19:39:26.429739 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:39:26.439875 systemd[1]: Started logrotate.timer. Oct 2 19:39:26.446948 systemd[1]: Started mdadm.timer. Oct 2 19:39:26.454692 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:39:26.463686 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:39:26.463758 systemd[1]: Reached target paths.target. Oct 2 19:39:26.470658 systemd[1]: Reached target timers.target. Oct 2 19:39:26.478105 systemd[1]: Listening on dbus.socket. Oct 2 19:39:26.487078 systemd[1]: Starting docker.socket... Oct 2 19:39:26.497890 systemd[1]: Listening on sshd.socket. Oct 2 19:39:26.504755 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:39:26.505524 systemd[1]: Listening on docker.socket. Oct 2 19:39:26.512854 systemd[1]: Reached target sockets.target. Oct 2 19:39:26.521655 systemd[1]: Reached target basic.target. Oct 2 19:39:26.528687 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:39:26.528735 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:39:26.530389 systemd[1]: Starting containerd.service... Oct 2 19:39:26.538965 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Oct 2 19:39:26.549458 systemd[1]: Starting dbus.service... Oct 2 19:39:26.557382 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:39:26.566391 systemd[1]: Starting extend-filesystems.service... Oct 2 19:39:26.571184 jq[1097]: false Oct 2 19:39:26.575641 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:39:26.577371 systemd[1]: Starting motdgen.service... Oct 2 19:39:26.586569 systemd[1]: Starting oem-gce.service... Oct 2 19:39:26.596226 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:39:26.606383 systemd[1]: Starting prepare-critools.service... Oct 2 19:39:26.617679 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:39:26.626580 systemd[1]: Starting sshd-keygen.service... Oct 2 19:39:26.637738 systemd[1]: Starting systemd-logind.service... Oct 2 19:39:26.644922 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:39:26.645034 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Oct 2 19:39:26.645807 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:39:26.647052 systemd[1]: Starting update-engine.service... Oct 2 19:39:26.656599 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:39:26.668285 jq[1120]: true Oct 2 19:39:26.671147 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:39:26.671475 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:39:26.672079 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:39:26.672345 systemd[1]: Finished motdgen.service. 
Oct 2 19:39:26.692898 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:39:26.693180 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:39:26.700456 tar[1124]: ./ Oct 2 19:39:26.700933 tar[1124]: ./loopback Oct 2 19:39:26.724213 mkfs.ext4[1129]: mke2fs 1.46.5 (30-Dec-2021) Oct 2 19:39:26.727739 jq[1127]: true Oct 2 19:39:26.730345 mkfs.ext4[1129]: Discarding device blocks: done Oct 2 19:39:26.730510 mkfs.ext4[1129]: Creating filesystem with 262144 4k blocks and 65536 inodes Oct 2 19:39:26.730510 mkfs.ext4[1129]: Filesystem UUID: 28cb4747-1dd8-4606-ba4d-11405e8cb3de Oct 2 19:39:26.730510 mkfs.ext4[1129]: Superblock backups stored on blocks: Oct 2 19:39:26.730510 mkfs.ext4[1129]: 32768, 98304, 163840, 229376 Oct 2 19:39:26.730510 mkfs.ext4[1129]: Allocating group tables: done Oct 2 19:39:26.730739 mkfs.ext4[1129]: Writing inode tables: done Oct 2 19:39:26.731504 mkfs.ext4[1129]: Creating journal (8192 blocks): done Oct 2 19:39:26.732399 extend-filesystems[1098]: Found sda Oct 2 19:39:26.739710 extend-filesystems[1098]: Found sda1 Oct 2 19:39:26.739710 extend-filesystems[1098]: Found sda2 Oct 2 19:39:26.739710 extend-filesystems[1098]: Found sda3 Oct 2 19:39:26.739710 extend-filesystems[1098]: Found usr Oct 2 19:39:26.739710 extend-filesystems[1098]: Found sda4 Oct 2 19:39:26.772044 mkfs.ext4[1129]: Writing superblocks and filesystem accounting information: done Oct 2 19:39:26.772149 extend-filesystems[1098]: Found sda6 Oct 2 19:39:26.772149 extend-filesystems[1098]: Found sda7 Oct 2 19:39:26.772149 extend-filesystems[1098]: Found sda9 Oct 2 19:39:26.772149 extend-filesystems[1098]: Checking size of /dev/sda9 Oct 2 19:39:26.801856 tar[1125]: crictl Oct 2 19:39:26.809729 umount[1142]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Oct 2 19:39:26.820886 extend-filesystems[1098]: Resized partition /dev/sda9 Oct 2 19:39:26.845522 kernel: loop0: detected capacity change from 0 to 2097152 Oct 2 19:39:26.854512 dbus-daemon[1096]: [system] SELinux support is enabled Oct 2 19:39:26.854809 systemd[1]: Started dbus.service. Oct 2 19:39:26.862200 dbus-daemon[1096]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1016 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 2 19:39:26.868290 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:39:26.868345 systemd[1]: Reached target system-config.target. Oct 2 19:39:26.869679 dbus-daemon[1096]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 2 19:39:26.875944 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:39:26.875991 systemd[1]: Reached target user-config.target. Oct 2 19:39:26.890187 systemd[1]: Starting systemd-hostnamed.service... 
Oct 2 19:39:26.902738 extend-filesystems[1160]: resize2fs 1.46.5 (30-Dec-2021) Oct 2 19:39:26.920474 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Oct 2 19:39:26.920003 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:39:26.920831 bash[1158]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:39:26.954101 update_engine[1119]: I1002 19:39:26.953730 1119 main.cc:92] Flatcar Update Engine starting Oct 2 19:39:26.961043 systemd[1]: Started update-engine.service. Oct 2 19:39:26.961668 update_engine[1119]: I1002 19:39:26.961424 1119 update_check_scheduler.cc:74] Next update check in 5m2s Oct 2 19:39:26.973508 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Oct 2 19:39:26.996228 kernel: EXT4-fs (loop0): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:39:26.994029 systemd[1]: Started locksmithd.service. Oct 2 19:39:26.996480 extend-filesystems[1160]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Oct 2 19:39:26.996480 extend-filesystems[1160]: old_desc_blocks = 1, new_desc_blocks = 2 Oct 2 19:39:26.996480 extend-filesystems[1160]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Oct 2 19:39:27.040669 extend-filesystems[1098]: Resized filesystem in /dev/sda9 Oct 2 19:39:27.048694 tar[1124]: ./bandwidth Oct 2 19:39:27.001339 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:39:27.001650 systemd[1]: Finished extend-filesystems.service. Oct 2 19:39:27.068576 tar[1124]: ./ptp Oct 2 19:39:27.080181 systemd-logind[1117]: Watching system buttons on /dev/input/event1 (Power Button) Oct 2 19:39:27.080223 systemd-logind[1117]: Watching system buttons on /dev/input/event2 (Sleep Button) Oct 2 19:39:27.080253 systemd-logind[1117]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 2 19:39:27.090281 env[1128]: time="2023-10-02T19:39:27.090189148Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:39:27.098603 systemd-logind[1117]: New seat seat0. Oct 2 19:39:27.110413 systemd[1]: Started systemd-logind.service. Oct 2 19:39:27.143754 tar[1124]: ./vlan Oct 2 19:39:27.202190 tar[1124]: ./host-device Oct 2 19:39:27.259250 tar[1124]: ./tuning Oct 2 19:39:27.309810 tar[1124]: ./vrf Oct 2 19:39:27.346708 env[1128]: time="2023-10-02T19:39:27.346658960Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:39:27.347118 env[1128]: time="2023-10-02T19:39:27.347088863Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:39:27.350826 env[1128]: time="2023-10-02T19:39:27.350757763Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:39:27.350826 env[1128]: time="2023-10-02T19:39:27.350820089Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:39:27.351231 env[1128]: time="2023-10-02T19:39:27.351192706Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:39:27.351339 env[1128]: time="2023-10-02T19:39:27.351230968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 19:39:27.351339 env[1128]: time="2023-10-02T19:39:27.351252424Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:39:27.351339 env[1128]: time="2023-10-02T19:39:27.351269380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:39:27.351536 env[1128]: time="2023-10-02T19:39:27.351410776Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:39:27.351838 env[1128]: time="2023-10-02T19:39:27.351803855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:39:27.352133 env[1128]: time="2023-10-02T19:39:27.352080325Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:39:27.352133 env[1128]: time="2023-10-02T19:39:27.352115469Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:39:27.352265 env[1128]: time="2023-10-02T19:39:27.352200088Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:39:27.352265 env[1128]: time="2023-10-02T19:39:27.352220762Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:39:27.362601 env[1128]: time="2023-10-02T19:39:27.362544652Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:39:27.362745 env[1128]: time="2023-10-02T19:39:27.362635721Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:39:27.362745 env[1128]: time="2023-10-02T19:39:27.362662770Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:39:27.362745 env[1128]: time="2023-10-02T19:39:27.362731619Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:39:27.362915 env[1128]: time="2023-10-02T19:39:27.362755058Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:39:27.362915 env[1128]: time="2023-10-02T19:39:27.362835781Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:39:27.362915 env[1128]: time="2023-10-02T19:39:27.362860451Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:39:27.362915 env[1128]: time="2023-10-02T19:39:27.362883629Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:39:27.362915 env[1128]: time="2023-10-02T19:39:27.362906316Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Oct 2 19:39:27.363140 env[1128]: time="2023-10-02T19:39:27.362930193Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:39:27.363140 env[1128]: time="2023-10-02T19:39:27.362962023Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:39:27.363140 env[1128]: time="2023-10-02T19:39:27.362988096Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:39:27.363290 env[1128]: time="2023-10-02T19:39:27.363142653Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:39:27.363290 env[1128]: time="2023-10-02T19:39:27.363273018Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:39:27.364102 env[1128]: time="2023-10-02T19:39:27.363834016Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:39:27.364102 env[1128]: time="2023-10-02T19:39:27.363888756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.364102 env[1128]: time="2023-10-02T19:39:27.363915624Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:39:27.364568 env[1128]: time="2023-10-02T19:39:27.364534165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.364726 env[1128]: time="2023-10-02T19:39:27.364647110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.364726 env[1128]: time="2023-10-02T19:39:27.364688699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.364726 env[1128]: time="2023-10-02T19:39:27.364710805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.364890 env[1128]: time="2023-10-02T19:39:27.364732033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.364890 env[1128]: time="2023-10-02T19:39:27.364754639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.364890 env[1128]: time="2023-10-02T19:39:27.364774884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.364890 env[1128]: time="2023-10-02T19:39:27.364794786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.364890 env[1128]: time="2023-10-02T19:39:27.364818029Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:39:27.365134 env[1128]: time="2023-10-02T19:39:27.364990805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.365134 env[1128]: time="2023-10-02T19:39:27.365028600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.365134 env[1128]: time="2023-10-02T19:39:27.365052868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Oct 2 19:39:27.365134 env[1128]: time="2023-10-02T19:39:27.365073718Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:39:27.365134 env[1128]: time="2023-10-02T19:39:27.365099215Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:39:27.365134 env[1128]: time="2023-10-02T19:39:27.365119047Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:39:27.365412 env[1128]: time="2023-10-02T19:39:27.365149004Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:39:27.365412 env[1128]: time="2023-10-02T19:39:27.365204345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.367681 env[1128]: time="2023-10-02T19:39:27.365536807Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:39:27.367681 env[1128]: time="2023-10-02T19:39:27.365652805Z" level=info msg="Connect containerd service" Oct 2 19:39:27.367681 env[1128]: time="2023-10-02T19:39:27.365706863Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:39:27.373781 env[1128]: time="2023-10-02T19:39:27.373736702Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network 
for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:39:27.376007 coreos-metadata[1095]: Oct 02 19:39:27.375 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Oct 2 19:39:27.385842 coreos-metadata[1095]: Oct 02 19:39:27.385 INFO Fetch failed with 404: resource not found Oct 2 19:39:27.386107 coreos-metadata[1095]: Oct 02 19:39:27.385 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Oct 2 19:39:27.387091 coreos-metadata[1095]: Oct 02 19:39:27.386 INFO Fetch successful Oct 2 19:39:27.387294 coreos-metadata[1095]: Oct 02 19:39:27.387 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Oct 2 19:39:27.388118 coreos-metadata[1095]: Oct 02 19:39:27.388 INFO Fetch failed with 404: resource not found Oct 2 19:39:27.388232 coreos-metadata[1095]: Oct 02 19:39:27.388 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Oct 2 19:39:27.388947 coreos-metadata[1095]: Oct 02 19:39:27.388 INFO Fetch failed with 404: resource not found Oct 2 19:39:27.389210 coreos-metadata[1095]: Oct 02 19:39:27.389 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Oct 2 19:39:27.389435 env[1128]: time="2023-10-02T19:39:27.388470788Z" level=info msg="Start subscribing containerd event" Oct 2 19:39:27.390345 coreos-metadata[1095]: Oct 02 19:39:27.390 INFO Fetch successful Oct 2 19:39:27.392733 unknown[1095]: wrote ssh authorized keys file for user: core Oct 2 19:39:27.408455 env[1128]: time="2023-10-02T19:39:27.408416130Z" level=info msg="Start recovering state" Oct 2 19:39:27.408867 env[1128]: time="2023-10-02T19:39:27.408834661Z" level=info msg="Start event monitor" Oct 2 19:39:27.408997 env[1128]: time="2023-10-02T19:39:27.408975797Z" level=info msg="Start snapshots syncer" Oct 2 19:39:27.409116 env[1128]: time="2023-10-02T19:39:27.409091312Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:39:27.409207 env[1128]: time="2023-10-02T19:39:27.409188627Z" level=info msg="Start streaming server" Oct 2 19:39:27.409563 env[1128]: time="2023-10-02T19:39:27.408353959Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:39:27.409772 env[1128]: time="2023-10-02T19:39:27.409750936Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:39:27.410737 systemd[1]: Started containerd.service. Oct 2 19:39:27.411210 env[1128]: time="2023-10-02T19:39:27.410993037Z" level=info msg="containerd successfully booted in 0.321917s" Oct 2 19:39:27.439916 update-ssh-keys[1173]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:39:27.440844 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Oct 2 19:39:27.469704 dbus-daemon[1096]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 2 19:39:27.469931 systemd[1]: Started systemd-hostnamed.service. Oct 2 19:39:27.470909 dbus-daemon[1096]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1161 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 2 19:39:27.477208 tar[1124]: ./sbr Oct 2 19:39:27.483570 systemd[1]: Starting polkit.service... 
Oct 2 19:39:27.593466 polkitd[1174]: Started polkitd version 121 Oct 2 19:39:27.628936 polkitd[1174]: Loading rules from directory /etc/polkit-1/rules.d Oct 2 19:39:27.630609 polkitd[1174]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 2 19:39:27.639713 polkitd[1174]: Finished loading, compiling and executing 2 rules Oct 2 19:39:27.640991 dbus-daemon[1096]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 2 19:39:27.641206 systemd[1]: Started polkit.service. Oct 2 19:39:27.642086 polkitd[1174]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 2 19:39:27.690059 systemd-hostnamed[1161]: Hostname set to (transient) Oct 2 19:39:27.694009 systemd-resolved[1065]: System hostname changed to 'ci-3510-3-0-78e31b6159f13b617250.c.flatcar-212911.internal'. Oct 2 19:39:27.708262 tar[1124]: ./tap Oct 2 19:39:27.862705 tar[1124]: ./dhcp Oct 2 19:39:28.020641 tar[1124]: ./static Oct 2 19:39:28.066118 tar[1124]: ./firewall Oct 2 19:39:28.149754 tar[1124]: ./macvlan Oct 2 19:39:28.256843 tar[1124]: ./dummy Oct 2 19:39:28.360101 tar[1124]: ./bridge Oct 2 19:39:28.489130 tar[1124]: ./ipvlan Oct 2 19:39:28.604386 tar[1124]: ./portmap Oct 2 19:39:28.715589 tar[1124]: ./host-local Oct 2 19:39:28.847043 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:39:28.967074 systemd[1]: Finished prepare-critools.service. Oct 2 19:39:30.090845 sshd_keygen[1122]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:39:30.135992 systemd[1]: Finished sshd-keygen.service. Oct 2 19:39:30.145112 systemd[1]: Starting issuegen.service... Oct 2 19:39:30.155204 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:39:30.155456 systemd[1]: Finished issuegen.service. Oct 2 19:39:30.165705 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:39:30.182176 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:39:30.194031 systemd[1]: Started getty@tty1.service. Oct 2 19:39:30.203063 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 19:39:30.212046 systemd[1]: Reached target getty.target. Oct 2 19:39:30.237555 locksmithd[1165]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:39:32.186443 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Oct 2 19:39:34.388519 kernel: loop0: detected capacity change from 0 to 2097152 Oct 2 19:39:34.404493 systemd-nspawn[1203]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Oct 2 19:39:34.404493 systemd-nspawn[1203]: Press ^] three times within 1s to kill container. Oct 2 19:39:34.418548 kernel: EXT4-fs (loop0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:39:34.500088 systemd[1]: Started oem-gce.service. Oct 2 19:39:34.508160 systemd[1]: Reached target multi-user.target. Oct 2 19:39:34.519710 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:39:34.532868 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:39:34.533104 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:39:34.543875 systemd[1]: Startup finished in 996ms (kernel) + 8.384s (initrd) + 15.358s (userspace) = 24.739s. Oct 2 19:39:34.573504 systemd-nspawn[1203]: + '[' -e /etc/default/instance_configs.cfg.template ']' Oct 2 19:39:34.573504 systemd-nspawn[1203]: + echo -e '[InstanceSetup]\nset_host_keys = false' Oct 2 19:39:34.573848 systemd-nspawn[1203]: + /usr/bin/google_instance_setup Oct 2 19:39:35.159443 instance-setup[1209]: INFO Running google_set_multiqueue. 
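Earlier in the boot the CRI plugin warned that no network config was found in /etc/cni/net.d, and the tar entries above show the CNI plugin binaries (bridge, host-local, portmap, ...) being unpacked before prepare-cni-plugins.service finishes. The sketch below writes the kind of minimal conflist that would satisfy that lookup; the network name, bridge device and subnet are placeholder values, not anything taken from this host.

# Sketch: drop a minimal CNI conflist where the CRI plugin looks for one.
# All values below (name, bridge, subnet) are illustrative placeholders;
# needs root on a real host.
import json, pathlib

conflist = {
    "cniVersion": "0.3.1",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",                      # one of the binaries unpacked above
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",              # also unpacked above
                "subnet": "10.88.0.0/16",
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

conf_dir = pathlib.Path("/etc/cni/net.d")          # NetworkPluginConfDir from the log
conf_dir.mkdir(parents=True, exist_ok=True)
(conf_dir / "10-example.conflist").write_text(json.dumps(conflist, indent=2))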
Oct 2 19:39:35.175350 instance-setup[1209]: INFO Set channels for eth0 to 2. Oct 2 19:39:35.179130 instance-setup[1209]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Oct 2 19:39:35.180600 instance-setup[1209]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Oct 2 19:39:35.181059 instance-setup[1209]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Oct 2 19:39:35.182617 instance-setup[1209]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Oct 2 19:39:35.182969 instance-setup[1209]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Oct 2 19:39:35.184335 instance-setup[1209]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Oct 2 19:39:35.184810 instance-setup[1209]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Oct 2 19:39:35.186267 instance-setup[1209]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Oct 2 19:39:35.197356 instance-setup[1209]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Oct 2 19:39:35.197728 instance-setup[1209]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Oct 2 19:39:35.237948 systemd-nspawn[1203]: + /usr/bin/google_metadata_script_runner --script-type startup Oct 2 19:39:35.474345 systemd[1]: Created slice system-sshd.slice. Oct 2 19:39:35.476399 systemd[1]: Started sshd@0-10.128.0.92:22-147.75.109.163:46826.service. Oct 2 19:39:35.582300 startup-script[1240]: INFO Starting startup scripts. Oct 2 19:39:35.595264 startup-script[1240]: INFO No startup scripts found in metadata. Oct 2 19:39:35.595428 startup-script[1240]: INFO Finished running startup scripts. Oct 2 19:39:35.632071 systemd-nspawn[1203]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Oct 2 19:39:35.632071 systemd-nspawn[1203]: + daemon_pids=() Oct 2 19:39:35.632757 systemd-nspawn[1203]: + for d in accounts clock_skew network Oct 2 19:39:35.632757 systemd-nspawn[1203]: + daemon_pids+=($!) Oct 2 19:39:35.632757 systemd-nspawn[1203]: + for d in accounts clock_skew network Oct 2 19:39:35.632911 systemd-nspawn[1203]: + daemon_pids+=($!) Oct 2 19:39:35.633825 systemd-nspawn[1203]: + for d in accounts clock_skew network Oct 2 19:39:35.633825 systemd-nspawn[1203]: + /usr/bin/google_clock_skew_daemon Oct 2 19:39:35.633825 systemd-nspawn[1203]: + daemon_pids+=($!) Oct 2 19:39:35.633825 systemd-nspawn[1203]: + NOTIFY_SOCKET=/run/systemd/notify Oct 2 19:39:35.633825 systemd-nspawn[1203]: + /usr/bin/systemd-notify --ready Oct 2 19:39:35.633825 systemd-nspawn[1203]: + /usr/bin/google_accounts_daemon Oct 2 19:39:35.634534 systemd-nspawn[1203]: + /usr/bin/google_network_daemon Oct 2 19:39:35.683150 systemd-nspawn[1203]: + wait -n 36 37 38 Oct 2 19:39:35.823195 sshd[1243]: Accepted publickey for core from 147.75.109.163 port 46826 ssh2: RSA SHA256:jERpPwUHWOcAB2iqPY1kqY7anFsHOqCrZti6FYmxuZo Oct 2 19:39:35.827136 sshd[1243]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:35.847805 systemd[1]: Created slice user-500.slice. Oct 2 19:39:35.849849 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:39:35.858321 systemd-logind[1117]: New session 1 of user core. Oct 2 19:39:35.867443 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:39:35.872340 systemd[1]: Starting user@500.service... Oct 2 19:39:35.912545 (systemd)[1251]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:36.120814 systemd[1251]: Queued start job for default target default.target. 
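google_set_multiqueue above spreads the four virtio-net interrupts across the two vCPUs and then sets a per-queue XPS mask for eth0, all by writing to /proc/irq/*/smp_affinity_list and /sys/class/net/eth0/queues/tx-*/xps_cpus. Here is a simplified sketch of those two writes; the IRQ numbers and the 0,0,1,1 spread match the log's outcome, but the real agent's queue-selection logic is more involved.

# Sketch: pin each IRQ in a list to a CPU and set the XPS mask for each TX queue,
# mirroring the smp_affinity_list and xps_cpus writes in the log. The IRQ list and
# CPU count are placeholders; needs root on a real host.
import pathlib

def set_irq_affinity(irqs, num_cpus):
    for i, irq in enumerate(irqs):
        cpu = i * num_cpus // len(irqs)            # naive spread: 31,32 -> 0; 33,34 -> 1
        pathlib.Path(f"/proc/irq/{irq}/smp_affinity_list").write_text(str(cpu))

def set_xps(dev, num_queues):
    for q in range(num_queues):
        mask = 1 << q                              # queue 0 -> XPS=1, queue 1 -> XPS=2
        path = pathlib.Path(f"/sys/class/net/{dev}/queues/tx-{q}/xps_cpus")
        path.write_text(format(mask, "x"))

if __name__ == "__main__":
    set_irq_affinity([31, 32, 33, 34], num_cpus=2)  # the virtio1 IRQs from the log
    set_xps("eth0", num_queues=2)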
Oct 2 19:39:36.121693 systemd[1251]: Reached target paths.target. Oct 2 19:39:36.121727 systemd[1251]: Reached target sockets.target. Oct 2 19:39:36.121751 systemd[1251]: Reached target timers.target. Oct 2 19:39:36.121775 systemd[1251]: Reached target basic.target. Oct 2 19:39:36.121851 systemd[1251]: Reached target default.target. Oct 2 19:39:36.121914 systemd[1251]: Startup finished in 180ms. Oct 2 19:39:36.122054 systemd[1]: Started user@500.service. Oct 2 19:39:36.123723 systemd[1]: Started session-1.scope. Oct 2 19:39:36.351380 systemd[1]: Started sshd@1-10.128.0.92:22-147.75.109.163:46828.service. Oct 2 19:39:36.564967 google-networking[1248]: INFO Starting Google Networking daemon. Oct 2 19:39:36.589153 google-clock-skew[1247]: INFO Starting Google Clock Skew daemon. Oct 2 19:39:36.607533 google-clock-skew[1247]: INFO Clock drift token has changed: 0. Oct 2 19:39:36.613436 systemd-nspawn[1203]: hwclock: Cannot access the Hardware Clock via any known method. Oct 2 19:39:36.613436 systemd-nspawn[1203]: hwclock: Use the --verbose option to see the details of our search for an access method. Oct 2 19:39:36.614385 google-clock-skew[1247]: WARNING Failed to sync system time with hardware clock. Oct 2 19:39:36.630809 groupadd[1270]: group added to /etc/group: name=google-sudoers, GID=1000 Oct 2 19:39:36.635113 groupadd[1270]: group added to /etc/gshadow: name=google-sudoers Oct 2 19:39:36.639213 groupadd[1270]: new group: name=google-sudoers, GID=1000 Oct 2 19:39:36.651625 google-accounts[1246]: INFO Starting Google Accounts daemon. Oct 2 19:39:36.662048 sshd[1262]: Accepted publickey for core from 147.75.109.163 port 46828 ssh2: RSA SHA256:jERpPwUHWOcAB2iqPY1kqY7anFsHOqCrZti6FYmxuZo Oct 2 19:39:36.663476 sshd[1262]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:36.671860 systemd[1]: Started session-2.scope. Oct 2 19:39:36.673564 systemd-logind[1117]: New session 2 of user core. Oct 2 19:39:36.686037 google-accounts[1246]: WARNING OS Login not installed. Oct 2 19:39:36.687159 google-accounts[1246]: INFO Creating a new user account for 0. Oct 2 19:39:36.692193 systemd-nspawn[1203]: useradd: invalid user name '0': use --badname to ignore Oct 2 19:39:36.693173 google-accounts[1246]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Oct 2 19:39:36.877076 sshd[1262]: pam_unix(sshd:session): session closed for user core Oct 2 19:39:36.881839 systemd[1]: sshd@1-10.128.0.92:22-147.75.109.163:46828.service: Deactivated successfully. Oct 2 19:39:36.882952 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:39:36.883841 systemd-logind[1117]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:39:36.885106 systemd-logind[1117]: Removed session 2. Oct 2 19:39:36.923404 systemd[1]: Started sshd@2-10.128.0.92:22-147.75.109.163:46830.service. Oct 2 19:39:37.215160 sshd[1284]: Accepted publickey for core from 147.75.109.163 port 46830 ssh2: RSA SHA256:jERpPwUHWOcAB2iqPY1kqY7anFsHOqCrZti6FYmxuZo Oct 2 19:39:37.217065 sshd[1284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:37.223855 systemd[1]: Started session-3.scope. Oct 2 19:39:37.224681 systemd-logind[1117]: New session 3 of user core. Oct 2 19:39:37.423810 sshd[1284]: pam_unix(sshd:session): session closed for user core Oct 2 19:39:37.428075 systemd[1]: sshd@2-10.128.0.92:22-147.75.109.163:46830.service: Deactivated successfully. 
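The accounts daemon above parses the ssh-keys metadata (entries of the form "username:ssh-key ...") and tries to create a local account per username, which is why useradd ends up rejecting the bare name "0". A small sketch of that parse-and-screen step, under the assumption that the documented "user:key" entry format applies; the validation regex only approximates useradd's default rules and is not its exact implementation.

# Sketch: split GCE ssh-keys metadata into (user, key) pairs and screen out
# names useradd would refuse (the log shows it rejecting the name "0").
# The regex approximates useradd's default NAME_REGEX; it is not authoritative.
import re

VALID_NAME = re.compile(r"^[a-z_][a-z0-9_-]*\$?$")

def parse_ssh_keys(blob):
    users = {}
    for line in blob.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue
        user, key = line.split(":", 1)
        users.setdefault(user, []).append(key.strip())
    return users

demo = "core:ssh-ed25519 AAAA... core@example\n0:ssh-rsa AAAA... someone@example"
for user, keys in parse_ssh_keys(demo).items():
    ok = bool(VALID_NAME.match(user))
    print(f"{user}: {len(keys)} key(s), {'ok' if ok else 'rejected (invalid user name)'}")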
Oct 2 19:39:37.429174 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 19:39:37.430052 systemd-logind[1117]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:39:37.432150 systemd-logind[1117]: Removed session 3. Oct 2 19:39:37.469926 systemd[1]: Started sshd@3-10.128.0.92:22-147.75.109.163:46846.service. Oct 2 19:39:37.763023 sshd[1291]: Accepted publickey for core from 147.75.109.163 port 46846 ssh2: RSA SHA256:jERpPwUHWOcAB2iqPY1kqY7anFsHOqCrZti6FYmxuZo Oct 2 19:39:37.764906 sshd[1291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:37.771046 systemd-logind[1117]: New session 4 of user core. Oct 2 19:39:37.771821 systemd[1]: Started session-4.scope. Oct 2 19:39:37.978694 sshd[1291]: pam_unix(sshd:session): session closed for user core Oct 2 19:39:37.982886 systemd[1]: sshd@3-10.128.0.92:22-147.75.109.163:46846.service: Deactivated successfully. Oct 2 19:39:37.983977 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:39:37.984930 systemd-logind[1117]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:39:37.986205 systemd-logind[1117]: Removed session 4. Oct 2 19:39:38.024597 systemd[1]: Started sshd@4-10.128.0.92:22-147.75.109.163:46862.service. Oct 2 19:39:38.312716 sshd[1297]: Accepted publickey for core from 147.75.109.163 port 46862 ssh2: RSA SHA256:jERpPwUHWOcAB2iqPY1kqY7anFsHOqCrZti6FYmxuZo Oct 2 19:39:38.314721 sshd[1297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:38.321329 systemd[1]: Started session-5.scope. Oct 2 19:39:38.321939 systemd-logind[1117]: New session 5 of user core. Oct 2 19:39:38.513232 sudo[1300]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:39:38.513662 sudo[1300]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:39:38.523746 dbus-daemon[1096]: \xd0-\xd0|QV: received setenforce notice (enforcing=1167647008) Oct 2 19:39:38.525942 sudo[1300]: pam_unix(sudo:session): session closed for user root Oct 2 19:39:38.570457 sshd[1297]: pam_unix(sshd:session): session closed for user core Oct 2 19:39:38.575419 systemd[1]: sshd@4-10.128.0.92:22-147.75.109.163:46862.service: Deactivated successfully. Oct 2 19:39:38.576805 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:39:38.577813 systemd-logind[1117]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:39:38.579236 systemd-logind[1117]: Removed session 5. Oct 2 19:39:38.616353 systemd[1]: Started sshd@5-10.128.0.92:22-147.75.109.163:46872.service. Oct 2 19:39:38.905814 sshd[1304]: Accepted publickey for core from 147.75.109.163 port 46872 ssh2: RSA SHA256:jERpPwUHWOcAB2iqPY1kqY7anFsHOqCrZti6FYmxuZo Oct 2 19:39:38.907386 sshd[1304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:38.914059 systemd[1]: Started session-6.scope. Oct 2 19:39:38.914691 systemd-logind[1117]: New session 6 of user core. 
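Each "Accepted publickey ... SHA256:jERp..." entry above identifies the client key by its OpenSSH SHA256 fingerprint: the SHA-256 digest of the raw key blob, base64-encoded with the padding stripped. The sketch below computes that fingerprint from an authorized_keys line so it can be matched against these sshd entries; the key in the demo is a dummy, not the key from this log.

# Sketch: compute the OpenSSH-style fingerprint ("SHA256:" + unpadded base64 of
# sha256 over the raw key blob) for an authorized_keys line.
import base64, hashlib

def ssh_fingerprint(authorized_keys_line):
    blob_b64 = authorized_keys_line.split()[1]     # "<type> <base64 blob> [comment]"
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# dummy ed25519-sized blob (all zeros), only to show the output shape
demo_blob = base64.b64encode(b"\x00" * 51).decode()
print(ssh_fingerprint("ssh-ed25519 " + demo_blob + " demo@example"))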
Oct 2 19:39:39.081575 sudo[1308]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:39:39.081955 sudo[1308]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:39:39.086185 sudo[1308]: pam_unix(sudo:session): session closed for user root Oct 2 19:39:39.098167 sudo[1307]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:39:39.098566 sudo[1307]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:39:39.111311 systemd[1]: Stopping audit-rules.service... Oct 2 19:39:39.111000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:39:39.114745 auditctl[1311]: No rules Oct 2 19:39:39.118926 kernel: kauditd_printk_skb: 177 callbacks suppressed Oct 2 19:39:39.119011 kernel: audit: type=1305 audit(1696275579.111:159): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:39:39.119459 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:39:39.119728 systemd[1]: Stopped audit-rules.service. Oct 2 19:39:39.122537 systemd[1]: Starting audit-rules.service... Oct 2 19:39:39.111000 audit[1311]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffddac24b80 a2=420 a3=0 items=0 ppid=1 pid=1311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:39.159022 augenrules[1328]: No rules Oct 2 19:39:39.160323 systemd[1]: Finished audit-rules.service. Oct 2 19:39:39.166463 kernel: audit: type=1300 audit(1696275579.111:159): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffddac24b80 a2=420 a3=0 items=0 ppid=1 pid=1311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:39.166662 kernel: audit: type=1327 audit(1696275579.111:159): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:39:39.111000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:39:39.167730 sudo[1307]: pam_unix(sudo:session): session closed for user root Oct 2 19:39:39.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:39.175523 kernel: audit: type=1131 audit(1696275579.118:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:39.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:39.212699 sshd[1304]: pam_unix(sshd:session): session closed for user core Oct 2 19:39:39.219753 kernel: audit: type=1130 audit(1696275579.159:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:39.219853 kernel: audit: type=1106 audit(1696275579.166:162): pid=1307 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:39.166000 audit[1307]: USER_END pid=1307 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:39.218054 systemd[1]: sshd@5-10.128.0.92:22-147.75.109.163:46872.service: Deactivated successfully. Oct 2 19:39:39.219154 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:39:39.221188 systemd-logind[1117]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:39:39.222989 systemd-logind[1117]: Removed session 6. Oct 2 19:39:39.166000 audit[1307]: CRED_DISP pid=1307 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:39.259078 systemd[1]: Started sshd@6-10.128.0.92:22-147.75.109.163:46884.service. Oct 2 19:39:39.267119 kernel: audit: type=1104 audit(1696275579.166:163): pid=1307 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:39.267242 kernel: audit: type=1106 audit(1696275579.212:164): pid=1304 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:39:39.212000 audit[1304]: USER_END pid=1304 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:39:39.212000 audit[1304]: CRED_DISP pid=1304 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:39:39.327967 kernel: audit: type=1104 audit(1696275579.212:165): pid=1304 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:39:39.328133 kernel: audit: type=1131 audit(1696275579.217:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.128.0.92:22-147.75.109.163:46872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:39.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.128.0.92:22-147.75.109.163:46872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:39.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.92:22-147.75.109.163:46884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:39.557000 audit[1334]: USER_ACCT pid=1334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:39:39.558811 sshd[1334]: Accepted publickey for core from 147.75.109.163 port 46884 ssh2: RSA SHA256:jERpPwUHWOcAB2iqPY1kqY7anFsHOqCrZti6FYmxuZo Oct 2 19:39:39.559000 audit[1334]: CRED_ACQ pid=1334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:39:39.559000 audit[1334]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc4811e3a0 a2=3 a3=0 items=0 ppid=1 pid=1334 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:39.559000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:39:39.560872 sshd[1334]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:39.567556 systemd[1]: Started session-7.scope. Oct 2 19:39:39.568452 systemd-logind[1117]: New session 7 of user core. Oct 2 19:39:39.576000 audit[1334]: USER_START pid=1334 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:39:39.579000 audit[1336]: CRED_ACQ pid=1336 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:39:39.734000 audit[1337]: USER_ACCT pid=1337 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:39.736120 sudo[1337]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:39:39.734000 audit[1337]: CRED_REFR pid=1337 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:39.736585 sudo[1337]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:39:39.737000 audit[1337]: USER_START pid=1337 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:40.326720 systemd[1]: Reloading. 
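The audit records above and the reload that follows produce a long run of kernel audit entries: AVC denials of the bpf and perfmon capabilities for PID 1, interleaved with BPF prog-id LOAD/UNLOAD events. Below is a short sketch that tallies such denials from journal text by comm, tclass and denied permission; the sample lines are trimmed copies of the format seen here, and real tooling would normally go through ausearch/auparse rather than a regex.

# Sketch: count AVC denials like the ones in this log by denied permission.
# The regex targets the 'avc: denied { perm } ... comm="x" ... tclass=...' shape
# visible here; scontext/tcontext are omitted from the samples for brevity.
import re
from collections import Counter

AVC = re.compile(r'avc:\s+denied\s+\{ (?P<perm>\w+) \}.*?comm="(?P<comm>[^"]+)".*?tclass=(?P<tclass>\S+)')

def tally(lines):
    counts = Counter()
    for line in lines:
        m = AVC.search(line)
        if m:
            counts[(m["comm"], m["tclass"], m["perm"])] += 1
    return counts

sample = [
    'audit[1]: AVC avc:  denied  { bpf } for pid=1 comm="systemd" capability=39 tclass=capability2 permissive=0',
    'audit[1]: AVC avc:  denied  { perfmon } for pid=1 comm="systemd" capability=38 tclass=capability2 permissive=0',
]
for key, n in tally(sample).items():
    print(key, n)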
Oct 2 19:39:40.438390 /usr/lib/systemd/system-generators/torcx-generator[1366]: time="2023-10-02T19:39:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:39:40.439052 /usr/lib/systemd/system-generators/torcx-generator[1366]: time="2023-10-02T19:39:40Z" level=info msg="torcx already run" Oct 2 19:39:40.545784 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:39:40.545812 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:39:40.569012 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:39:40.654000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.654000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.654000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.654000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.654000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.654000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.654000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.654000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.654000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.654000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.654000 audit: BPF prog-id=34 op=LOAD Oct 2 19:39:40.654000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:39:40.655000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.655000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.655000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.655000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.655000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.655000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.655000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.655000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.655000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.655000 audit: BPF prog-id=35 op=LOAD Oct 2 19:39:40.655000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.655000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.655000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.655000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.655000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.655000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.655000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.655000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.655000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.655000 audit: BPF prog-id=36 
op=LOAD Oct 2 19:39:40.655000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:39:40.655000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:39:40.658000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.658000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.658000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.658000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.658000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.658000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.658000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.658000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.658000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.660000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.660000 audit: BPF prog-id=37 op=LOAD Oct 2 19:39:40.660000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:39:40.660000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.660000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:39:40.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.660000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.660000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.660000 audit: BPF prog-id=38 op=LOAD Oct 2 19:39:40.660000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.660000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.660000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.660000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.660000 audit: BPF prog-id=39 op=LOAD Oct 2 19:39:40.660000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:39:40.660000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit: BPF prog-id=40 op=LOAD Oct 2 19:39:40.661000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit: BPF prog-id=41 op=LOAD Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.661000 audit: BPF prog-id=42 op=LOAD Oct 2 19:39:40.661000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:39:40.661000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:39:40.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.664000 audit: BPF prog-id=43 op=LOAD Oct 2 19:39:40.664000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:39:40.665000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.665000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.665000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.665000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.665000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.665000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.665000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.665000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.665000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.665000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.665000 audit: BPF prog-id=44 op=LOAD Oct 2 19:39:40.665000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:39:40.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.668000 audit: BPF prog-id=45 op=LOAD Oct 2 19:39:40.668000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit: BPF prog-id=46 op=LOAD Oct 2 19:39:40.697000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit: BPF prog-id=47 op=LOAD Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.697000 audit: BPF prog-id=48 op=LOAD Oct 2 19:39:40.697000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:39:40.697000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit: BPF prog-id=49 op=LOAD Oct 2 19:39:40.698000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit: BPF prog-id=50 op=LOAD Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.698000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.699000 audit: BPF prog-id=51 op=LOAD Oct 2 19:39:40.699000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:39:40.699000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:39:40.716750 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:39:40.725674 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:39:40.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:40.726582 systemd[1]: Reached target network-online.target. Oct 2 19:39:40.728992 systemd[1]: Started kubelet.service. 
Oct 2 19:39:40.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:39:40.747320 systemd[1]: Starting coreos-metadata.service...
Oct 2 19:39:40.827950 coreos-metadata[1419]: Oct 02 19:39:40.827 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Oct 2 19:39:40.830093 coreos-metadata[1419]: Oct 02 19:39:40.829 INFO Fetch successful
Oct 2 19:39:40.830349 coreos-metadata[1419]: Oct 02 19:39:40.830 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Oct 2 19:39:40.831407 coreos-metadata[1419]: Oct 02 19:39:40.831 INFO Fetch successful
Oct 2 19:39:40.831659 coreos-metadata[1419]: Oct 02 19:39:40.831 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Oct 2 19:39:40.832610 coreos-metadata[1419]: Oct 02 19:39:40.832 INFO Fetch successful
Oct 2 19:39:40.832836 coreos-metadata[1419]: Oct 02 19:39:40.832 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Oct 2 19:39:40.833727 coreos-metadata[1419]: Oct 02 19:39:40.833 INFO Fetch successful
Oct 2 19:39:40.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:39:40.848604 systemd[1]: Finished coreos-metadata.service.
Oct 2 19:39:40.860475 kubelet[1411]: E1002 19:39:40.857639 1411 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 2 19:39:40.860517 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 2 19:39:40.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Oct 2 19:39:40.860742 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 2 19:39:41.297664 systemd[1]: Stopped kubelet.service.
Oct 2 19:39:41.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:39:41.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:39:41.322822 systemd[1]: Reloading.
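Annotation: the first kubelet start fails with status 1 because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by kubeadm or whatever provisioning step configures this node, and the unit keeps getting restarted until it appears. A minimal KubeletConfiguration of the kind expected at that path could look like the sketch below; the values shown are illustrative, chosen to match what the kubelet itself later reports in this log (systemd cgroup driver, /etc/kubernetes/pki/ca.crt client CA, /etc/kubernetes/manifests static pod path), and the authorization mode is an assumed typical choice rather than something visible here.

    # /var/lib/kubelet/config.yaml -- illustrative sketch only
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    authorization:
      mode: Webhook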
Oct 2 19:39:41.435046 /usr/lib/systemd/system-generators/torcx-generator[1477]: time="2023-10-02T19:39:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:39:41.435563 /usr/lib/systemd/system-generators/torcx-generator[1477]: time="2023-10-02T19:39:41Z" level=info msg="torcx already run" Oct 2 19:39:41.537244 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:39:41.537270 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:39:41.560641 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:39:41.650000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.650000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.650000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.650000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.650000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.650000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.650000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.650000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.650000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.650000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.650000 audit: BPF prog-id=52 op=LOAD Oct 2 19:39:41.650000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:39:41.650000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.651000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.651000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.651000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.651000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.651000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.651000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.651000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.651000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.651000 audit: BPF prog-id=53 op=LOAD Oct 2 19:39:41.651000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.651000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.651000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.651000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.651000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.651000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.651000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.651000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.651000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.651000 audit: BPF prog-id=54 
op=LOAD Oct 2 19:39:41.651000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:39:41.651000 audit: BPF prog-id=36 op=UNLOAD Oct 2 19:39:41.654000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.654000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.654000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.654000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.654000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.654000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.654000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.654000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.654000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.654000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.654000 audit: BPF prog-id=55 op=LOAD Oct 2 19:39:41.654000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:39:41.654000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.654000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.654000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.654000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.654000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.654000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:39:41.655000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.655000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.655000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.655000 audit: BPF prog-id=56 op=LOAD Oct 2 19:39:41.655000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.655000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.655000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.655000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.655000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.655000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.655000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.655000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.655000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.655000 audit: BPF prog-id=57 op=LOAD Oct 2 19:39:41.655000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:39:41.655000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:39:41.655000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit: BPF prog-id=58 op=LOAD Oct 2 19:39:41.656000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit: BPF prog-id=59 op=LOAD Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.656000 audit: BPF prog-id=60 op=LOAD Oct 2 19:39:41.656000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:39:41.656000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:39:41.659000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.659000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.659000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.659000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.659000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.659000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.659000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.659000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.659000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.659000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.659000 audit: BPF prog-id=61 op=LOAD Oct 2 19:39:41.659000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:39:41.660000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.660000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.660000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.660000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.660000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.660000 audit: BPF prog-id=62 op=LOAD Oct 2 19:39:41.660000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:39:41.662000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.662000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.662000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.662000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.662000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.662000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.662000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.662000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.662000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.662000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.662000 audit: BPF prog-id=63 op=LOAD Oct 2 19:39:41.662000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:39:41.665000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.665000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.665000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.665000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.665000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.665000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.665000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.665000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.665000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.665000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.665000 audit: BPF prog-id=64 op=LOAD Oct 2 19:39:41.665000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:39:41.665000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.665000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.665000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.665000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.665000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.665000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.665000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.665000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.665000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.666000 audit: BPF prog-id=65 op=LOAD Oct 2 19:39:41.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.666000 audit: BPF prog-id=66 op=LOAD Oct 2 19:39:41.666000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:39:41.666000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:39:41.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.666000 audit: BPF prog-id=67 op=LOAD Oct 2 19:39:41.666000 audit: BPF prog-id=49 op=UNLOAD Oct 2 19:39:41.667000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.667000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.667000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.667000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.667000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.667000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.667000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.667000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.667000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.667000 audit: BPF prog-id=68 op=LOAD Oct 2 19:39:41.667000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.667000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.667000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.667000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.667000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.667000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.667000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.667000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.667000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:41.667000 audit: BPF prog-id=69 op=LOAD Oct 2 19:39:41.667000 audit: BPF prog-id=50 op=UNLOAD Oct 2 19:39:41.667000 audit: BPF prog-id=51 op=UNLOAD Oct 2 19:39:41.691529 systemd[1]: Started kubelet.service. Oct 2 19:39:41.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:41.752456 kubelet[1522]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:39:41.752456 kubelet[1522]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Oct 2 19:39:41.752456 kubelet[1522]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:39:41.753044 kubelet[1522]: I1002 19:39:41.752548 1522 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:39:42.734830 kubelet[1522]: I1002 19:39:42.734772 1522 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Oct 2 19:39:42.734830 kubelet[1522]: I1002 19:39:42.734810 1522 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:39:42.735174 kubelet[1522]: I1002 19:39:42.735133 1522 server.go:895] "Client rotation is on, will bootstrap in background" Oct 2 19:39:42.741643 kubelet[1522]: I1002 19:39:42.741609 1522 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:39:42.750267 kubelet[1522]: I1002 19:39:42.750231 1522 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 2 19:39:42.750711 kubelet[1522]: I1002 19:39:42.750675 1522 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:39:42.750942 kubelet[1522]: I1002 19:39:42.750904 1522 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 2 19:39:42.750942 kubelet[1522]: I1002 19:39:42.750941 1522 topology_manager.go:138] "Creating topology manager with none policy" Oct 2 19:39:42.751198 kubelet[1522]: I1002 19:39:42.750957 1522 container_manager_linux.go:301] "Creating device plugin manager" Oct 2 19:39:42.751198 kubelet[1522]: I1002 19:39:42.751112 1522 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:39:42.751310 kubelet[1522]: I1002 19:39:42.751253 1522 kubelet.go:393] "Attempting to sync node with API server" Oct 2 19:39:42.751310 kubelet[1522]: I1002 19:39:42.751277 1522 kubelet.go:298] "Adding static pod 
path" path="/etc/kubernetes/manifests" Oct 2 19:39:42.751591 kubelet[1522]: I1002 19:39:42.751565 1522 kubelet.go:309] "Adding apiserver pod source" Oct 2 19:39:42.751740 kubelet[1522]: I1002 19:39:42.751723 1522 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:39:42.752020 kubelet[1522]: E1002 19:39:42.751769 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:42.752020 kubelet[1522]: E1002 19:39:42.751681 1522 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:42.752880 kubelet[1522]: I1002 19:39:42.752858 1522 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:39:42.756686 kubelet[1522]: W1002 19:39:42.756649 1522 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 2 19:39:42.757375 kubelet[1522]: I1002 19:39:42.757350 1522 server.go:1232] "Started kubelet" Oct 2 19:39:42.757610 kubelet[1522]: I1002 19:39:42.757591 1522 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:39:42.758729 kubelet[1522]: I1002 19:39:42.758708 1522 server.go:462] "Adding debug handlers to kubelet server" Oct 2 19:39:42.760000 audit[1522]: AVC avc: denied { mac_admin } for pid=1522 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:42.760000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:39:42.760000 audit[1522]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a0b6e0 a1=c000b6c228 a2=c000a0b6b0 a3=25 items=0 ppid=1 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:42.763295 kubelet[1522]: E1002 19:39:42.762671 1522 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.92.178a61a7bf6b214f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.92", UID:"10.128.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.92"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 757314895, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 757314895, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.92"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:42.763295 kubelet[1522]: W1002 19:39:42.762816 1522 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.128.0.92" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:42.763295 kubelet[1522]: E1002 19:39:42.762850 1522 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.128.0.92" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:42.763629 kubelet[1522]: W1002 19:39:42.762892 1522 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:42.763629 kubelet[1522]: E1002 19:39:42.762907 1522 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:42.763629 kubelet[1522]: I1002 19:39:42.757699 1522 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Oct 2 19:39:42.763629 kubelet[1522]: I1002 19:39:42.763165 1522 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 2 19:39:42.763838 kubelet[1522]: E1002 19:39:42.763633 1522 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:39:42.763838 kubelet[1522]: E1002 19:39:42.763660 1522 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:39:42.760000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:39:42.764421 kubelet[1522]: I1002 19:39:42.764402 1522 kubelet.go:1386] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:39:42.763000 audit[1522]: AVC avc: denied { mac_admin } for pid=1522 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:42.763000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:39:42.763000 audit[1522]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00061ffc0 a1=c000b6c240 a2=c000a0b770 a3=25 items=0 ppid=1 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:42.763000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:39:42.764965 kubelet[1522]: I1002 19:39:42.764947 1522 kubelet.go:1390] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:39:42.765143 kubelet[1522]: I1002 19:39:42.765130 1522 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:39:42.766679 kubelet[1522]: E1002 19:39:42.766076 1522 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.92.178a61a7bfcbc23a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.92", UID:"10.128.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.92"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 763647546, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 763647546, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.92"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:42.771850 kubelet[1522]: E1002 19:39:42.770892 1522 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.92\" not found" Oct 2 19:39:42.771850 kubelet[1522]: I1002 19:39:42.770926 1522 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 2 19:39:42.771850 kubelet[1522]: I1002 19:39:42.771054 1522 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 2 19:39:42.771850 kubelet[1522]: I1002 19:39:42.771115 1522 reconciler_new.go:29] "Reconciler: start to sync state" Oct 2 19:39:42.772778 kubelet[1522]: E1002 19:39:42.772514 1522 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.128.0.92\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Oct 2 19:39:42.772778 kubelet[1522]: W1002 19:39:42.772726 1522 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:42.772778 kubelet[1522]: E1002 19:39:42.772756 1522 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:42.819051 kubelet[1522]: E1002 19:39:42.818940 1522 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.92.178a61a7c3011289", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.92", UID:"10.128.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.92 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.92"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 817473161, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 817473161, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.92"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:42.819873 kubelet[1522]: I1002 19:39:42.819848 1522 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 2 19:39:42.820050 kubelet[1522]: I1002 19:39:42.820035 1522 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 2 19:39:42.820146 kubelet[1522]: I1002 19:39:42.820134 1522 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:39:42.820747 kubelet[1522]: E1002 19:39:42.820657 1522 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.92.178a61a7c3013119", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.92", UID:"10.128.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.92 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.92"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 817480985, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 817480985, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.92"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:42.821980 kubelet[1522]: E1002 19:39:42.821879 1522 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.92.178a61a7c3019b55", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.92", UID:"10.128.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.92 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.92"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 817508181, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 817508181, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.92"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:42.822791 kubelet[1522]: I1002 19:39:42.822772 1522 policy_none.go:49] "None policy: Start" Oct 2 19:39:42.823707 kubelet[1522]: I1002 19:39:42.823689 1522 memory_manager.go:169] "Starting memorymanager" policy="None" Oct 2 19:39:42.823890 kubelet[1522]: I1002 19:39:42.823875 1522 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:39:42.825000 audit[1537]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:42.825000 audit[1537]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffec60a130 a2=0 a3=7fffec60a11c items=0 ppid=1522 pid=1537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:42.825000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:39:42.827000 audit[1541]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:42.827000 audit[1541]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffe84bb0d00 a2=0 a3=7ffe84bb0cec items=0 ppid=1522 pid=1541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:42.827000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:39:42.834604 systemd[1]: Created slice kubepods.slice. Oct 2 19:39:42.841605 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:39:42.846235 systemd[1]: Created slice kubepods-besteffort.slice. 
Oct 2 19:39:42.831000 audit[1543]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:42.831000 audit[1543]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffecf7221f0 a2=0 a3=7ffecf7221dc items=0 ppid=1522 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:42.831000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:39:42.854780 kubelet[1522]: I1002 19:39:42.854749 1522 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:39:42.853000 audit[1522]: AVC avc: denied { mac_admin } for pid=1522 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:42.853000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:39:42.853000 audit[1522]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000e7d170 a1=c000cbc2d0 a2=c000e7d140 a3=25 items=0 ppid=1 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:42.853000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:39:42.855193 kubelet[1522]: I1002 19:39:42.854882 1522 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:39:42.856738 kubelet[1522]: I1002 19:39:42.856717 1522 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:39:42.857262 kubelet[1522]: E1002 19:39:42.857239 1522 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.128.0.92\" not found" Oct 2 19:39:42.858000 audit[1548]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:42.858000 audit[1548]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffde963b100 a2=0 a3=7ffde963b0ec items=0 ppid=1522 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:42.858000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:39:42.861289 kubelet[1522]: E1002 19:39:42.861204 1522 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.92.178a61a7c57ba82c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.92", UID:"10.128.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.92"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 859061292, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 859061292, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.92"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:42.872252 kubelet[1522]: I1002 19:39:42.872201 1522 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.92" Oct 2 19:39:42.874827 kubelet[1522]: E1002 19:39:42.874727 1522 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.92.178a61a7c3011289", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.92", UID:"10.128.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.92 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.92"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 817473161, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 872123179, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.92"}': 'events "10.128.0.92.178a61a7c3011289" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:42.875132 kubelet[1522]: E1002 19:39:42.875106 1522 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.92" Oct 2 19:39:42.876065 kubelet[1522]: E1002 19:39:42.875981 1522 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.92.178a61a7c3013119", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.92", UID:"10.128.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.92 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.92"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 817480985, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 872130580, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.92"}': 'events "10.128.0.92.178a61a7c3013119" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:42.877140 kubelet[1522]: E1002 19:39:42.877056 1522 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.92.178a61a7c3019b55", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.92", UID:"10.128.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.92 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.92"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 817508181, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 872157168, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.92"}': 'events "10.128.0.92.178a61a7c3019b55" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:42.916000 audit[1553]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1553 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:42.916000 audit[1553]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcaf6e7d30 a2=0 a3=7ffcaf6e7d1c items=0 ppid=1522 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:42.916000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:39:42.918524 kubelet[1522]: I1002 19:39:42.918468 1522 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Oct 2 19:39:42.918000 audit[1554]: NETFILTER_CFG table=mangle:7 family=2 entries=1 op=nft_register_chain pid=1554 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:42.918000 audit[1554]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff43a03680 a2=0 a3=7fff43a0366c items=0 ppid=1522 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:42.918000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:39:42.919000 audit[1555]: NETFILTER_CFG table=mangle:8 family=10 entries=2 op=nft_register_chain pid=1555 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:42.919000 audit[1555]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd1b3bb000 a2=0 a3=7ffd1b3bafec items=0 ppid=1522 pid=1555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:42.919000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:39:42.921411 kubelet[1522]: I1002 19:39:42.921366 1522 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 2 19:39:42.921411 kubelet[1522]: I1002 19:39:42.921395 1522 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 2 19:39:42.921575 kubelet[1522]: I1002 19:39:42.921428 1522 kubelet.go:2303] "Starting kubelet main sync loop" Oct 2 19:39:42.921575 kubelet[1522]: E1002 19:39:42.921519 1522 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:39:42.923504 kubelet[1522]: W1002 19:39:42.923442 1522 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:42.923504 kubelet[1522]: E1002 19:39:42.923503 1522 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:42.922000 audit[1557]: NETFILTER_CFG table=mangle:9 family=10 entries=1 op=nft_register_chain pid=1557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:42.922000 audit[1557]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc1eabb330 a2=0 a3=7ffc1eabb31c items=0 ppid=1522 pid=1557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:42.922000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:39:42.923000 audit[1556]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:42.923000 audit[1556]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffefe52f2a0 
a2=0 a3=7ffefe52f28c items=0 ppid=1522 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:42.923000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:39:42.924000 audit[1558]: NETFILTER_CFG table=nat:11 family=10 entries=2 op=nft_register_chain pid=1558 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:42.924000 audit[1558]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffd50cc64a0 a2=0 a3=7ffd50cc648c items=0 ppid=1522 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:42.924000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:39:42.926000 audit[1559]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_chain pid=1559 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:42.926000 audit[1559]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd0fd7e830 a2=0 a3=7ffd0fd7e81c items=0 ppid=1522 pid=1559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:42.926000 audit[1560]: NETFILTER_CFG table=filter:13 family=10 entries=2 op=nft_register_chain pid=1560 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:42.926000 audit[1560]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd3654c610 a2=0 a3=7ffd3654c5fc items=0 ppid=1522 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:42.926000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:39:42.926000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:39:42.974057 kubelet[1522]: E1002 19:39:42.973973 1522 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.128.0.92\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Oct 2 19:39:43.076845 kubelet[1522]: I1002 19:39:43.076524 1522 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.92" Oct 2 19:39:43.082305 kubelet[1522]: E1002 19:39:43.077821 1522 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.92" Oct 2 19:39:43.082305 kubelet[1522]: E1002 19:39:43.078270 1522 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.92.178a61a7c3011289", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 
0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.92", UID:"10.128.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.92 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.92"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 817473161, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 43, 76440343, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.92"}': 'events "10.128.0.92.178a61a7c3011289" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:43.082639 kubelet[1522]: E1002 19:39:43.079842 1522 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.92.178a61a7c3013119", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.92", UID:"10.128.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.92 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.92"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 817480985, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 43, 76448315, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.92"}': 'events "10.128.0.92.178a61a7c3013119" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:43.082731 kubelet[1522]: E1002 19:39:43.081127 1522 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.92.178a61a7c3019b55", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.92", UID:"10.128.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.92 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.92"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 817508181, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 43, 76452841, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.92"}': 'events "10.128.0.92.178a61a7c3019b55" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:43.376617 kubelet[1522]: E1002 19:39:43.376463 1522 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.128.0.92\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Oct 2 19:39:43.479224 kubelet[1522]: I1002 19:39:43.479165 1522 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.92" Oct 2 19:39:43.480890 kubelet[1522]: E1002 19:39:43.480847 1522 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.92" Oct 2 19:39:43.480890 kubelet[1522]: E1002 19:39:43.480811 1522 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.92.178a61a7c3011289", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.92", UID:"10.128.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.92 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.92"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 817473161, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 43, 479102206, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), 
ReportingController:"kubelet", ReportingInstance:"10.128.0.92"}': 'events "10.128.0.92.178a61a7c3011289" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:43.482146 kubelet[1522]: E1002 19:39:43.482045 1522 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.92.178a61a7c3013119", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.92", UID:"10.128.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.92 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.92"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 817480985, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 43, 479117040, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.92"}': 'events "10.128.0.92.178a61a7c3013119" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:43.483268 kubelet[1522]: E1002 19:39:43.483171 1522 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.92.178a61a7c3019b55", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.92", UID:"10.128.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.92 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.92"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 42, 817508181, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 43, 479121371, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.92"}': 'events "10.128.0.92.178a61a7c3019b55" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:43.741390 kubelet[1522]: I1002 19:39:43.741320 1522 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:39:43.752699 kubelet[1522]: E1002 19:39:43.752619 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:44.131782 kubelet[1522]: E1002 19:39:44.131643 1522 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.128.0.92" not found Oct 2 19:39:44.182986 kubelet[1522]: E1002 19:39:44.182938 1522 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.128.0.92\" not found" node="10.128.0.92" Oct 2 19:39:44.282806 kubelet[1522]: I1002 19:39:44.282756 1522 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.92" Oct 2 19:39:44.287690 kubelet[1522]: I1002 19:39:44.287643 1522 kubelet_node_status.go:73] "Successfully registered node" node="10.128.0.92" Oct 2 19:39:44.307248 kubelet[1522]: I1002 19:39:44.307203 1522 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:39:44.307744 env[1128]: time="2023-10-02T19:39:44.307678391Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 2 19:39:44.308861 kubelet[1522]: I1002 19:39:44.308790 1522 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:39:44.712727 sudo[1337]: pam_unix(sudo:session): session closed for user root Oct 2 19:39:44.711000 audit[1337]: USER_END pid=1337 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:44.719070 kernel: kauditd_printk_skb: 478 callbacks suppressed Oct 2 19:39:44.719189 kernel: audit: type=1106 audit(1696275584.711:610): pid=1337 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:44.711000 audit[1337]: CRED_DISP pid=1337 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:44.753715 kubelet[1522]: E1002 19:39:44.753667 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:44.754018 kubelet[1522]: I1002 19:39:44.753963 1522 apiserver.go:52] "Watching apiserver" Oct 2 19:39:44.757290 kubelet[1522]: I1002 19:39:44.757259 1522 topology_manager.go:215] "Topology Admit Handler" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" podNamespace="kube-system" podName="cilium-qt9fs" Oct 2 19:39:44.757634 kubelet[1522]: I1002 19:39:44.757616 1522 topology_manager.go:215] "Topology Admit Handler" podUID="bd0d3761-927a-43ad-a38f-9a2249ac4da3" podNamespace="kube-system" podName="kube-proxy-242bh" Oct 2 19:39:44.766248 kernel: audit: type=1104 audit(1696275584.711:611): pid=1337 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:44.768805 sshd[1334]: pam_unix(sshd:session): session closed for user core Oct 2 19:39:44.770000 audit[1334]: USER_END pid=1334 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:39:44.774463 systemd[1]: sshd@6-10.128.0.92:22-147.75.109.163:46884.service: Deactivated successfully. Oct 2 19:39:44.775668 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:39:44.777580 kubelet[1522]: I1002 19:39:44.777555 1522 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 2 19:39:44.783253 kubelet[1522]: I1002 19:39:44.783229 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-cilium-run\") pod \"cilium-qt9fs\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " pod="kube-system/cilium-qt9fs" Oct 2 19:39:44.783575 kubelet[1522]: I1002 19:39:44.783559 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-hostproc\") pod \"cilium-qt9fs\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " pod="kube-system/cilium-qt9fs" Oct 2 19:39:44.783806 kubelet[1522]: I1002 19:39:44.783791 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-etc-cni-netd\") pod \"cilium-qt9fs\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " pod="kube-system/cilium-qt9fs" Oct 2 19:39:44.784011 kubelet[1522]: I1002 19:39:44.783964 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-xtables-lock\") pod \"cilium-qt9fs\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " pod="kube-system/cilium-qt9fs" Oct 2 19:39:44.784165 kubelet[1522]: I1002 19:39:44.784152 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd0d3761-927a-43ad-a38f-9a2249ac4da3-xtables-lock\") pod \"kube-proxy-242bh\" (UID: \"bd0d3761-927a-43ad-a38f-9a2249ac4da3\") " pod="kube-system/kube-proxy-242bh" Oct 2 19:39:44.784375 kubelet[1522]: I1002 19:39:44.784360 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bd0d3761-927a-43ad-a38f-9a2249ac4da3-kube-proxy\") pod \"kube-proxy-242bh\" (UID: \"bd0d3761-927a-43ad-a38f-9a2249ac4da3\") " pod="kube-system/kube-proxy-242bh" Oct 2 19:39:44.784616 kubelet[1522]: I1002 19:39:44.784566 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd0d3761-927a-43ad-a38f-9a2249ac4da3-lib-modules\") pod \"kube-proxy-242bh\" (UID: \"bd0d3761-927a-43ad-a38f-9a2249ac4da3\") " pod="kube-system/kube-proxy-242bh" Oct 2 19:39:44.784773 kubelet[1522]: I1002 19:39:44.784759 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-xj2gg\" (UniqueName: \"kubernetes.io/projected/bd0d3761-927a-43ad-a38f-9a2249ac4da3-kube-api-access-xj2gg\") pod \"kube-proxy-242bh\" (UID: \"bd0d3761-927a-43ad-a38f-9a2249ac4da3\") " pod="kube-system/kube-proxy-242bh" Oct 2 19:39:44.784988 kubelet[1522]: I1002 19:39:44.784974 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-cni-path\") pod \"cilium-qt9fs\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " pod="kube-system/cilium-qt9fs" Oct 2 19:39:44.785180 kubelet[1522]: I1002 19:39:44.785145 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-lib-modules\") pod \"cilium-qt9fs\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " pod="kube-system/cilium-qt9fs" Oct 2 19:39:44.785376 kubelet[1522]: I1002 19:39:44.785342 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-host-proc-sys-net\") pod \"cilium-qt9fs\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " pod="kube-system/cilium-qt9fs" Oct 2 19:39:44.785610 kubelet[1522]: I1002 19:39:44.785597 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-host-proc-sys-kernel\") pod \"cilium-qt9fs\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " pod="kube-system/cilium-qt9fs" Oct 2 19:39:44.785821 kubelet[1522]: I1002 19:39:44.785776 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltl9w\" (UniqueName: \"kubernetes.io/projected/e41d8468-c486-4f54-9489-19b4b7dd3190-kube-api-access-ltl9w\") pod \"cilium-qt9fs\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " pod="kube-system/cilium-qt9fs" Oct 2 19:39:44.785990 kubelet[1522]: I1002 19:39:44.785952 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-bpf-maps\") pod \"cilium-qt9fs\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " pod="kube-system/cilium-qt9fs" Oct 2 19:39:44.786166 kubelet[1522]: I1002 19:39:44.786132 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-cilium-cgroup\") pod \"cilium-qt9fs\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " pod="kube-system/cilium-qt9fs" Oct 2 19:39:44.786422 kubelet[1522]: I1002 19:39:44.786369 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e41d8468-c486-4f54-9489-19b4b7dd3190-clustermesh-secrets\") pod \"cilium-qt9fs\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " pod="kube-system/cilium-qt9fs" Oct 2 19:39:44.786587 kubelet[1522]: I1002 19:39:44.786575 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e41d8468-c486-4f54-9489-19b4b7dd3190-cilium-config-path\") pod \"cilium-qt9fs\" (UID: 
\"e41d8468-c486-4f54-9489-19b4b7dd3190\") " pod="kube-system/cilium-qt9fs" Oct 2 19:39:44.786783 kubelet[1522]: I1002 19:39:44.786746 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e41d8468-c486-4f54-9489-19b4b7dd3190-hubble-tls\") pod \"cilium-qt9fs\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " pod="kube-system/cilium-qt9fs" Oct 2 19:39:44.792643 systemd[1]: Created slice kubepods-besteffort-podbd0d3761_927a_43ad_a38f_9a2249ac4da3.slice. Oct 2 19:39:44.793557 systemd-logind[1117]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:39:44.795572 systemd-logind[1117]: Removed session 7. Oct 2 19:39:44.770000 audit[1334]: CRED_DISP pid=1334 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:39:44.829326 kernel: audit: type=1106 audit(1696275584.770:612): pid=1334 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:39:44.829453 kernel: audit: type=1104 audit(1696275584.770:613): pid=1334 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:39:44.829549 kernel: audit: type=1131 audit(1696275584.771:614): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.92:22-147.75.109.163:46884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:44.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.92:22-147.75.109.163:46884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:44.867205 systemd[1]: Created slice kubepods-burstable-pode41d8468_c486_4f54_9489_19b4b7dd3190.slice. Oct 2 19:39:45.163804 env[1128]: time="2023-10-02T19:39:45.163656800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-242bh,Uid:bd0d3761-927a-43ad-a38f-9a2249ac4da3,Namespace:kube-system,Attempt:0,}" Oct 2 19:39:45.175524 env[1128]: time="2023-10-02T19:39:45.175449957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qt9fs,Uid:e41d8468-c486-4f54-9489-19b4b7dd3190,Namespace:kube-system,Attempt:0,}" Oct 2 19:39:45.715927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1642353497.mount: Deactivated successfully. 
Oct 2 19:39:45.728479 env[1128]: time="2023-10-02T19:39:45.728420296Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:45.729995 env[1128]: time="2023-10-02T19:39:45.729936043Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:45.733881 env[1128]: time="2023-10-02T19:39:45.733799376Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:45.735097 env[1128]: time="2023-10-02T19:39:45.735041014Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:45.736135 env[1128]: time="2023-10-02T19:39:45.736099878Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:45.738383 env[1128]: time="2023-10-02T19:39:45.738330382Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:45.739390 env[1128]: time="2023-10-02T19:39:45.739352704Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:45.748555 env[1128]: time="2023-10-02T19:39:45.748433184Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:45.754386 kubelet[1522]: E1002 19:39:45.754332 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:45.776046 env[1128]: time="2023-10-02T19:39:45.773282852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:39:45.776046 env[1128]: time="2023-10-02T19:39:45.773334979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:39:45.776046 env[1128]: time="2023-10-02T19:39:45.773354873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:39:45.776046 env[1128]: time="2023-10-02T19:39:45.773551737Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c pid=1575 runtime=io.containerd.runc.v2 Oct 2 19:39:45.779683 env[1128]: time="2023-10-02T19:39:45.779579702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:39:45.779898 env[1128]: time="2023-10-02T19:39:45.779642891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:39:45.779898 env[1128]: time="2023-10-02T19:39:45.779663653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:39:45.780879 env[1128]: time="2023-10-02T19:39:45.780789720Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee3d9a13cf6e946e88de367fc62c29fa92700fd72dd4d6b0974f2b51cc4a7a44 pid=1590 runtime=io.containerd.runc.v2 Oct 2 19:39:45.802420 systemd[1]: Started cri-containerd-1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c.scope. Oct 2 19:39:45.829000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.829000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.871354 kernel: audit: type=1400 audit(1696275585.829:615): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.871548 kernel: audit: type=1400 audit(1696275585.829:616): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.829000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.893519 kernel: audit: type=1400 audit(1696275585.829:617): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.911461 systemd[1]: Started cri-containerd-ee3d9a13cf6e946e88de367fc62c29fa92700fd72dd4d6b0974f2b51cc4a7a44.scope. 
Oct 2 19:39:45.829000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.938528 kernel: audit: type=1400 audit(1696275585.829:618): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.829000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.972522 kernel: audit: type=1400 audit(1696275585.829:619): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.976023 env[1128]: time="2023-10-02T19:39:45.975973568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qt9fs,Uid:e41d8468-c486-4f54-9489-19b4b7dd3190,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c\"" Oct 2 19:39:45.829000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.829000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.829000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.829000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.892000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.893000 audit: BPF prog-id=70 op=LOAD Oct 2 19:39:45.893000 audit[1600]: AVC avc: denied { bpf } for pid=1600 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.893000 audit[1600]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=1575 pid=1600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:45.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166323433303961316463356334623566666164343535366339343562 Oct 2 19:39:45.893000 audit[1600]: AVC avc: denied { perfmon } for pid=1600 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.893000 audit[1600]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=1575 pid=1600 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:45.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166323433303961316463356334623566666164343535366339343562 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { bpf } for pid=1600 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { bpf } for pid=1600 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { bpf } for pid=1600 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { perfmon } for pid=1600 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { perfmon } for pid=1600 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { perfmon } for pid=1600 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { perfmon } for pid=1600 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { perfmon } for pid=1600 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { bpf } for pid=1600 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { bpf } for pid=1600 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit: BPF prog-id=71 op=LOAD Oct 2 19:39:45.898000 audit[1600]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0002143f0 items=0 ppid=1575 pid=1600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:45.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166323433303961316463356334623566666164343535366339343562 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { bpf } for pid=1600 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { bpf } for 
pid=1600 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { perfmon } for pid=1600 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { perfmon } for pid=1600 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { perfmon } for pid=1600 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { perfmon } for pid=1600 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { perfmon } for pid=1600 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { bpf } for pid=1600 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { bpf } for pid=1600 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit: BPF prog-id=72 op=LOAD Oct 2 19:39:45.898000 audit[1600]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000214438 items=0 ppid=1575 pid=1600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:45.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166323433303961316463356334623566666164343535366339343562 Oct 2 19:39:45.898000 audit: BPF prog-id=72 op=UNLOAD Oct 2 19:39:45.898000 audit: BPF prog-id=71 op=UNLOAD Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { bpf } for pid=1600 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { bpf } for pid=1600 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { bpf } for pid=1600 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { perfmon } for pid=1600 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { perfmon } for pid=1600 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { perfmon } for 
pid=1600 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { perfmon } for pid=1600 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { perfmon } for pid=1600 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { bpf } for pid=1600 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit[1600]: AVC avc: denied { bpf } for pid=1600 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.898000 audit: BPF prog-id=73 op=LOAD Oct 2 19:39:45.898000 audit[1600]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000214848 items=0 ppid=1575 pid=1600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:45.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166323433303961316463356334623566666164343535366339343562 Oct 2 19:39:45.954000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.954000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.954000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.954000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.954000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.954000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.954000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.954000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:45.973000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:39:45.983075 kubelet[1522]: E1002 19:39:45.983033 1522 gcpcredential.go:74] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url Oct 2 19:39:45.985548 env[1128]: time="2023-10-02T19:39:45.985473913Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 2 19:39:45.997624 env[1128]: time="2023-10-02T19:39:45.997528315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-242bh,Uid:bd0d3761-927a-43ad-a38f-9a2249ac4da3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee3d9a13cf6e946e88de367fc62c29fa92700fd72dd4d6b0974f2b51cc4a7a44\"" Oct 2 19:39:46.754737 kubelet[1522]: E1002 19:39:46.754668 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:47.755586 kubelet[1522]: E1002 19:39:47.755534 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:48.755849 kubelet[1522]: E1002 19:39:48.755769 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:49.756714 kubelet[1522]: E1002 19:39:49.756617 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:50.757722 kubelet[1522]: E1002 19:39:50.757646 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:51.274771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount26022630.mount: Deactivated successfully. 
Oct 2 19:39:51.758261 kubelet[1522]: E1002 19:39:51.758213 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:52.758707 kubelet[1522]: E1002 19:39:52.758665 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:53.759164 kubelet[1522]: E1002 19:39:53.759112 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:54.574094 env[1128]: time="2023-10-02T19:39:54.574015817Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:54.576847 env[1128]: time="2023-10-02T19:39:54.576771594Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:54.579237 env[1128]: time="2023-10-02T19:39:54.579193961Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:54.580062 env[1128]: time="2023-10-02T19:39:54.580007450Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 2 19:39:54.581461 env[1128]: time="2023-10-02T19:39:54.581408625Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.2\"" Oct 2 19:39:54.583747 env[1128]: time="2023-10-02T19:39:54.583693294Z" level=info msg="CreateContainer within sandbox \"1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:39:54.599811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount130798603.mount: Deactivated successfully. Oct 2 19:39:54.611967 env[1128]: time="2023-10-02T19:39:54.611906099Z" level=info msg="CreateContainer within sandbox \"1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6e009039cea73de1d0606fc2d7286501b9cac86f9937638cba24ae423ccca162\"" Oct 2 19:39:54.612753 env[1128]: time="2023-10-02T19:39:54.612692905Z" level=info msg="StartContainer for \"6e009039cea73de1d0606fc2d7286501b9cac86f9937638cba24ae423ccca162\"" Oct 2 19:39:54.651916 systemd[1]: Started cri-containerd-6e009039cea73de1d0606fc2d7286501b9cac86f9937638cba24ae423ccca162.scope. Oct 2 19:39:54.666713 systemd[1]: cri-containerd-6e009039cea73de1d0606fc2d7286501b9cac86f9937638cba24ae423ccca162.scope: Deactivated successfully. Oct 2 19:39:54.759521 kubelet[1522]: E1002 19:39:54.759440 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:55.595725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e009039cea73de1d0606fc2d7286501b9cac86f9937638cba24ae423ccca162-rootfs.mount: Deactivated successfully. 
Oct 2 19:39:55.760357 kubelet[1522]: E1002 19:39:55.760296 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:56.485036 env[1128]: time="2023-10-02T19:39:56.484940970Z" level=info msg="shim disconnected" id=6e009039cea73de1d0606fc2d7286501b9cac86f9937638cba24ae423ccca162 Oct 2 19:39:56.485036 env[1128]: time="2023-10-02T19:39:56.485018643Z" level=warning msg="cleaning up after shim disconnected" id=6e009039cea73de1d0606fc2d7286501b9cac86f9937638cba24ae423ccca162 namespace=k8s.io Oct 2 19:39:56.485036 env[1128]: time="2023-10-02T19:39:56.485034877Z" level=info msg="cleaning up dead shim" Oct 2 19:39:56.496854 env[1128]: time="2023-10-02T19:39:56.496773523Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:39:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1678 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:39:56Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6e009039cea73de1d0606fc2d7286501b9cac86f9937638cba24ae423ccca162/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:39:56.497287 env[1128]: time="2023-10-02T19:39:56.497137056Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed" Oct 2 19:39:56.497618 env[1128]: time="2023-10-02T19:39:56.497560771Z" level=error msg="Failed to pipe stdout of container \"6e009039cea73de1d0606fc2d7286501b9cac86f9937638cba24ae423ccca162\"" error="reading from a closed fifo" Oct 2 19:39:56.498051 env[1128]: time="2023-10-02T19:39:56.497783618Z" level=error msg="Failed to pipe stderr of container \"6e009039cea73de1d0606fc2d7286501b9cac86f9937638cba24ae423ccca162\"" error="reading from a closed fifo" Oct 2 19:39:56.500577 env[1128]: time="2023-10-02T19:39:56.500476369Z" level=error msg="StartContainer for \"6e009039cea73de1d0606fc2d7286501b9cac86f9937638cba24ae423ccca162\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:39:56.500934 kubelet[1522]: E1002 19:39:56.500904 1522 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6e009039cea73de1d0606fc2d7286501b9cac86f9937638cba24ae423ccca162" Oct 2 19:39:56.501117 kubelet[1522]: E1002 19:39:56.501093 1522 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:39:56.501117 kubelet[1522]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:39:56.501117 kubelet[1522]: rm /hostbin/cilium-mount Oct 2 19:39:56.501309 kubelet[1522]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ltl9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:39:56.501309 kubelet[1522]: E1002 19:39:56.501167 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-qt9fs" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" Oct 2 19:39:56.761689 kubelet[1522]: E1002 19:39:56.760563 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:56.981370 env[1128]: time="2023-10-02T19:39:56.981306987Z" level=info msg="CreateContainer within sandbox \"1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:39:57.028425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3489281527.mount: Deactivated successfully. Oct 2 19:39:57.039577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2734882848.mount: Deactivated successfully. Oct 2 19:39:57.043821 env[1128]: time="2023-10-02T19:39:57.043759269Z" level=info msg="CreateContainer within sandbox \"1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"3e2523581a7f3162414b80f9eb824c3053e250bce07473922cae717410b22597\"" Oct 2 19:39:57.045107 env[1128]: time="2023-10-02T19:39:57.045061747Z" level=info msg="StartContainer for \"3e2523581a7f3162414b80f9eb824c3053e250bce07473922cae717410b22597\"" Oct 2 19:39:57.082195 systemd[1]: Started cri-containerd-3e2523581a7f3162414b80f9eb824c3053e250bce07473922cae717410b22597.scope. 
Oct 2 19:39:57.104783 systemd[1]: cri-containerd-3e2523581a7f3162414b80f9eb824c3053e250bce07473922cae717410b22597.scope: Deactivated successfully. Oct 2 19:39:57.138087 env[1128]: time="2023-10-02T19:39:57.138012280Z" level=info msg="shim disconnected" id=3e2523581a7f3162414b80f9eb824c3053e250bce07473922cae717410b22597 Oct 2 19:39:57.138087 env[1128]: time="2023-10-02T19:39:57.138090324Z" level=warning msg="cleaning up after shim disconnected" id=3e2523581a7f3162414b80f9eb824c3053e250bce07473922cae717410b22597 namespace=k8s.io Oct 2 19:39:57.138087 env[1128]: time="2023-10-02T19:39:57.138104618Z" level=info msg="cleaning up dead shim" Oct 2 19:39:57.160609 env[1128]: time="2023-10-02T19:39:57.160544723Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:39:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1718 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:39:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3e2523581a7f3162414b80f9eb824c3053e250bce07473922cae717410b22597/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:39:57.161228 env[1128]: time="2023-10-02T19:39:57.161145880Z" level=error msg="copy shim log" error="read /proc/self/fd/50: file already closed" Oct 2 19:39:57.161712 env[1128]: time="2023-10-02T19:39:57.161652670Z" level=error msg="Failed to pipe stdout of container \"3e2523581a7f3162414b80f9eb824c3053e250bce07473922cae717410b22597\"" error="reading from a closed fifo" Oct 2 19:39:57.162156 env[1128]: time="2023-10-02T19:39:57.161859872Z" level=error msg="Failed to pipe stderr of container \"3e2523581a7f3162414b80f9eb824c3053e250bce07473922cae717410b22597\"" error="reading from a closed fifo" Oct 2 19:39:57.164300 env[1128]: time="2023-10-02T19:39:57.164247496Z" level=error msg="StartContainer for \"3e2523581a7f3162414b80f9eb824c3053e250bce07473922cae717410b22597\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:39:57.164749 kubelet[1522]: E1002 19:39:57.164698 1522 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3e2523581a7f3162414b80f9eb824c3053e250bce07473922cae717410b22597" Oct 2 19:39:57.164899 kubelet[1522]: E1002 19:39:57.164859 1522 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:39:57.164899 kubelet[1522]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:39:57.164899 kubelet[1522]: rm /hostbin/cilium-mount Oct 2 19:39:57.164899 kubelet[1522]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ltl9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:39:57.165233 kubelet[1522]: E1002 19:39:57.164918 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-qt9fs" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" Oct 2 19:39:57.751899 kernel: kauditd_printk_skb: 175 callbacks suppressed Oct 2 19:39:57.752090 kernel: audit: type=1131 audit(1696275597.722:642): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:57.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:57.723389 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Oct 2 19:39:57.761058 kubelet[1522]: E1002 19:39:57.760973 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:57.770000 audit: BPF prog-id=69 op=UNLOAD Oct 2 19:39:57.770000 audit: BPF prog-id=68 op=UNLOAD Oct 2 19:39:57.786194 kernel: audit: type=1334 audit(1696275597.770:643): prog-id=69 op=UNLOAD Oct 2 19:39:57.786307 kernel: audit: type=1334 audit(1696275597.770:644): prog-id=68 op=UNLOAD Oct 2 19:39:57.786363 kernel: audit: type=1334 audit(1696275597.770:645): prog-id=67 op=UNLOAD Oct 2 19:39:57.770000 audit: BPF prog-id=67 op=UNLOAD Oct 2 19:39:57.824948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2995811563.mount: Deactivated successfully. Oct 2 19:39:57.982345 kubelet[1522]: I1002 19:39:57.982303 1522 scope.go:117] "RemoveContainer" containerID="6e009039cea73de1d0606fc2d7286501b9cac86f9937638cba24ae423ccca162" Oct 2 19:39:57.983319 kubelet[1522]: I1002 19:39:57.983289 1522 scope.go:117] "RemoveContainer" containerID="6e009039cea73de1d0606fc2d7286501b9cac86f9937638cba24ae423ccca162" Oct 2 19:39:57.985435 env[1128]: time="2023-10-02T19:39:57.985381511Z" level=info msg="RemoveContainer for \"6e009039cea73de1d0606fc2d7286501b9cac86f9937638cba24ae423ccca162\"" Oct 2 19:39:57.991603 env[1128]: time="2023-10-02T19:39:57.991555129Z" level=info msg="RemoveContainer for \"6e009039cea73de1d0606fc2d7286501b9cac86f9937638cba24ae423ccca162\" returns successfully" Oct 2 19:39:57.992046 env[1128]: time="2023-10-02T19:39:57.992007502Z" level=info msg="RemoveContainer for \"6e009039cea73de1d0606fc2d7286501b9cac86f9937638cba24ae423ccca162\"" Oct 2 19:39:57.992162 env[1128]: time="2023-10-02T19:39:57.992049583Z" level=info msg="RemoveContainer for \"6e009039cea73de1d0606fc2d7286501b9cac86f9937638cba24ae423ccca162\" returns successfully" Oct 2 19:39:57.992830 kubelet[1522]: E1002 19:39:57.992794 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190)\"" pod="kube-system/cilium-qt9fs" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" Oct 2 19:39:58.111124 env[1128]: time="2023-10-02T19:39:58.110960975Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:58.114189 env[1128]: time="2023-10-02T19:39:58.114131678Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:58.116441 env[1128]: time="2023-10-02T19:39:58.116392077Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:58.118435 env[1128]: time="2023-10-02T19:39:58.118395298Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:58.119073 env[1128]: time="2023-10-02T19:39:58.119032502Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.2\" returns image reference \"sha256:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0\"" Oct 
2 19:39:58.121739 env[1128]: time="2023-10-02T19:39:58.121686740Z" level=info msg="CreateContainer within sandbox \"ee3d9a13cf6e946e88de367fc62c29fa92700fd72dd4d6b0974f2b51cc4a7a44\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:39:58.136571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3178100859.mount: Deactivated successfully. Oct 2 19:39:58.147088 env[1128]: time="2023-10-02T19:39:58.147028801Z" level=info msg="CreateContainer within sandbox \"ee3d9a13cf6e946e88de367fc62c29fa92700fd72dd4d6b0974f2b51cc4a7a44\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2d5f2e4fc6ab1e2eebf85d9e88f3d41a09480a27915866266f098ed57cb98c61\"" Oct 2 19:39:58.147781 env[1128]: time="2023-10-02T19:39:58.147696236Z" level=info msg="StartContainer for \"2d5f2e4fc6ab1e2eebf85d9e88f3d41a09480a27915866266f098ed57cb98c61\"" Oct 2 19:39:58.184469 systemd[1]: Started cri-containerd-2d5f2e4fc6ab1e2eebf85d9e88f3d41a09480a27915866266f098ed57cb98c61.scope. Oct 2 19:39:58.204000 audit[1742]: AVC avc: denied { perfmon } for pid=1742 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.204000 audit[1742]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c00011f6b0 a2=3c a3=8 items=0 ppid=1590 pid=1742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.258208 kernel: audit: type=1400 audit(1696275598.204:646): avc: denied { perfmon } for pid=1742 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.258386 kernel: audit: type=1300 audit(1696275598.204:646): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c00011f6b0 a2=3c a3=8 items=0 ppid=1590 pid=1742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.258433 kernel: audit: type=1327 audit(1696275598.204:646): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264356632653466633661623165326565626638356439653838663364 Oct 2 19:39:58.204000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264356632653466633661623165326565626638356439653838663364 Oct 2 19:39:58.208000 audit[1742]: AVC avc: denied { bpf } for pid=1742 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.208000 audit[1742]: AVC avc: denied { bpf } for pid=1742 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.335200 kernel: audit: type=1400 audit(1696275598.208:647): avc: denied { bpf } for pid=1742 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.335401 kernel: audit: type=1400 
audit(1696275598.208:647): avc: denied { bpf } for pid=1742 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.208000 audit[1742]: AVC avc: denied { bpf } for pid=1742 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.208000 audit[1742]: AVC avc: denied { perfmon } for pid=1742 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.208000 audit[1742]: AVC avc: denied { perfmon } for pid=1742 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.357567 kernel: audit: type=1400 audit(1696275598.208:647): avc: denied { bpf } for pid=1742 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.208000 audit[1742]: AVC avc: denied { perfmon } for pid=1742 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.208000 audit[1742]: AVC avc: denied { perfmon } for pid=1742 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.208000 audit[1742]: AVC avc: denied { perfmon } for pid=1742 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.208000 audit[1742]: AVC avc: denied { bpf } for pid=1742 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.208000 audit[1742]: AVC avc: denied { bpf } for pid=1742 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.208000 audit: BPF prog-id=78 op=LOAD Oct 2 19:39:58.208000 audit[1742]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00011f9d8 a2=78 a3=c000380ae0 items=0 ppid=1590 pid=1742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264356632653466633661623165326565626638356439653838663364 Oct 2 19:39:58.225000 audit[1742]: AVC avc: denied { bpf } for pid=1742 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.225000 audit[1742]: AVC avc: denied { bpf } for pid=1742 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.225000 audit[1742]: AVC avc: denied { perfmon } for pid=1742 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.225000 audit[1742]: AVC avc: denied { perfmon } for pid=1742 comm="runc" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.225000 audit[1742]: AVC avc: denied { perfmon } for pid=1742 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.225000 audit[1742]: AVC avc: denied { perfmon } for pid=1742 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.225000 audit[1742]: AVC avc: denied { perfmon } for pid=1742 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.225000 audit[1742]: AVC avc: denied { bpf } for pid=1742 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.225000 audit[1742]: AVC avc: denied { bpf } for pid=1742 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.225000 audit: BPF prog-id=79 op=LOAD Oct 2 19:39:58.225000 audit[1742]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00011f770 a2=78 a3=c000380b28 items=0 ppid=1590 pid=1742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264356632653466633661623165326565626638356439653838663364 Oct 2 19:39:58.285000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:39:58.285000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:39:58.285000 audit[1742]: AVC avc: denied { bpf } for pid=1742 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.285000 audit[1742]: AVC avc: denied { bpf } for pid=1742 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.285000 audit[1742]: AVC avc: denied { bpf } for pid=1742 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.285000 audit[1742]: AVC avc: denied { perfmon } for pid=1742 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.285000 audit[1742]: AVC avc: denied { perfmon } for pid=1742 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.285000 audit[1742]: AVC avc: denied { perfmon } for pid=1742 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.285000 audit[1742]: AVC avc: denied { perfmon } for pid=1742 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.285000 audit[1742]: AVC avc: denied { perfmon } for pid=1742 comm="runc" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.285000 audit[1742]: AVC avc: denied { bpf } for pid=1742 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.285000 audit[1742]: AVC avc: denied { bpf } for pid=1742 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:58.285000 audit: BPF prog-id=80 op=LOAD Oct 2 19:39:58.285000 audit[1742]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00011fc30 a2=78 a3=c000380bb8 items=0 ppid=1590 pid=1742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.285000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264356632653466633661623165326565626638356439653838663364 Oct 2 19:39:58.360669 env[1128]: time="2023-10-02T19:39:58.360602663Z" level=info msg="StartContainer for \"2d5f2e4fc6ab1e2eebf85d9e88f3d41a09480a27915866266f098ed57cb98c61\" returns successfully" Oct 2 19:39:58.427000 audit[1795]: NETFILTER_CFG table=mangle:14 family=2 entries=1 op=nft_register_chain pid=1795 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:58.427000 audit[1795]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd2baeaf90 a2=0 a3=7ffd2baeaf7c items=0 ppid=1754 pid=1795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.427000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:39:58.431000 audit[1796]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_chain pid=1796 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:58.431000 audit[1796]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9f86d800 a2=0 a3=7ffe9f86d7ec items=0 ppid=1754 pid=1796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.431000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:39:58.434000 audit[1797]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_chain pid=1797 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:58.434000 audit[1797]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffccd290b50 a2=0 a3=7ffccd290b3c items=0 ppid=1754 pid=1797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.434000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:39:58.435000 audit[1798]: NETFILTER_CFG table=mangle:17 family=10 entries=1 
op=nft_register_chain pid=1798 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.435000 audit[1798]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffee06b1600 a2=0 a3=7ffee06b15ec items=0 ppid=1754 pid=1798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.435000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:39:58.437000 audit[1799]: NETFILTER_CFG table=nat:18 family=10 entries=1 op=nft_register_chain pid=1799 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.437000 audit[1799]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc096b9770 a2=0 a3=7ffc096b975c items=0 ppid=1754 pid=1799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.437000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:39:58.439000 audit[1800]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1800 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.439000 audit[1800]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2502da00 a2=0 a3=7ffe2502d9ec items=0 ppid=1754 pid=1800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.439000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:39:58.531000 audit[1801]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_chain pid=1801 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:58.531000 audit[1801]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc075db2a0 a2=0 a3=7ffc075db28c items=0 ppid=1754 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.531000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:39:58.534000 audit[1803]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1803 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:58.534000 audit[1803]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffff9f3b280 a2=0 a3=7ffff9f3b26c items=0 ppid=1754 pid=1803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.534000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:39:58.539000 audit[1806]: NETFILTER_CFG table=filter:22 family=2 entries=2 
op=nft_register_chain pid=1806 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:58.539000 audit[1806]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffedfa59d70 a2=0 a3=7ffedfa59d5c items=0 ppid=1754 pid=1806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.539000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:39:58.541000 audit[1807]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_chain pid=1807 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:58.541000 audit[1807]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe38df59a0 a2=0 a3=7ffe38df598c items=0 ppid=1754 pid=1807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.541000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:39:58.544000 audit[1809]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=1809 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:58.544000 audit[1809]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe716e1320 a2=0 a3=7ffe716e130c items=0 ppid=1754 pid=1809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.544000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:39:58.546000 audit[1810]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_chain pid=1810 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:58.546000 audit[1810]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffea7d2190 a2=0 a3=7fffea7d217c items=0 ppid=1754 pid=1810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.546000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:39:58.550000 audit[1812]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=1812 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:58.550000 audit[1812]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe3dc47d60 a2=0 a3=7ffe3dc47d4c items=0 ppid=1754 pid=1812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.550000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:39:58.559000 audit[1815]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=1815 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:58.559000 audit[1815]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffedb0df260 a2=0 a3=7ffedb0df24c items=0 ppid=1754 pid=1815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.559000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:39:58.561000 audit[1816]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1816 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:58.561000 audit[1816]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbd4d8f20 a2=0 a3=7fffbd4d8f0c items=0 ppid=1754 pid=1816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.561000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:39:58.565000 audit[1818]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1818 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:58.565000 audit[1818]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd90114b50 a2=0 a3=7ffd90114b3c items=0 ppid=1754 pid=1818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.565000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:39:58.567000 audit[1819]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=1819 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:58.567000 audit[1819]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd8cd94c60 a2=0 a3=7ffd8cd94c4c items=0 ppid=1754 pid=1819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.567000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:39:58.570000 audit[1821]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_rule pid=1821 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:58.570000 audit[1821]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcee7bb270 a2=0 a3=7ffcee7bb25c items=0 ppid=1754 pid=1821 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.570000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:39:58.575000 audit[1824]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=1824 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:58.575000 audit[1824]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe9e5c5a00 a2=0 a3=7ffe9e5c59ec items=0 ppid=1754 pid=1824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.575000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:39:58.581000 audit[1827]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1827 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:58.581000 audit[1827]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff675ef3f0 a2=0 a3=7fff675ef3dc items=0 ppid=1754 pid=1827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.581000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:39:58.583000 audit[1828]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1828 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:58.583000 audit[1828]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe095b0600 a2=0 a3=7ffe095b05ec items=0 ppid=1754 pid=1828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.583000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:39:58.586000 audit[1830]: NETFILTER_CFG table=nat:35 family=2 entries=2 op=nft_register_chain pid=1830 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:58.586000 audit[1830]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffe2896a220 a2=0 a3=7ffe2896a20c items=0 ppid=1754 pid=1830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.586000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:39:58.619000 audit[1835]: NETFILTER_CFG table=nat:36 family=2 entries=2 op=nft_register_chain pid=1835 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:58.619000 audit[1835]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe2d4cac50 a2=0 a3=7ffe2d4cac3c items=0 ppid=1754 pid=1835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.619000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:39:58.621000 audit[1836]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=1836 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:58.621000 audit[1836]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc86946b50 a2=0 a3=7ffc86946b3c items=0 ppid=1754 pid=1836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.621000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:39:58.625000 audit[1838]: NETFILTER_CFG table=nat:38 family=2 entries=2 op=nft_register_chain pid=1838 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:58.625000 audit[1838]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff43cb5e40 a2=0 a3=7fff43cb5e2c items=0 ppid=1754 pid=1838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.625000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:39:58.645000 audit[1844]: NETFILTER_CFG table=filter:39 family=2 entries=8 op=nft_register_rule pid=1844 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:39:58.645000 audit[1844]: SYSCALL arch=c000003e syscall=46 success=yes exit=4956 a0=3 a1=7fff907ccfe0 a2=0 a3=7fff907ccfcc items=0 ppid=1754 pid=1844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.645000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:39:58.665000 audit[1844]: NETFILTER_CFG table=nat:40 family=2 entries=14 op=nft_register_chain pid=1844 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:39:58.665000 audit[1844]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fff907ccfe0 a2=0 a3=7fff907ccfcc items=0 ppid=1754 pid=1844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.665000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:39:58.667000 audit[1850]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=1850 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.667000 audit[1850]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc669bd6e0 a2=0 a3=7ffc669bd6cc items=0 ppid=1754 pid=1850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.667000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:39:58.672000 audit[1852]: NETFILTER_CFG table=filter:42 family=10 entries=2 op=nft_register_chain pid=1852 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.672000 audit[1852]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe7b2a3280 a2=0 a3=7ffe7b2a326c items=0 ppid=1754 pid=1852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.672000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:39:58.678000 audit[1855]: NETFILTER_CFG table=filter:43 family=10 entries=2 op=nft_register_chain pid=1855 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.678000 audit[1855]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdcf560370 a2=0 a3=7ffdcf56035c items=0 ppid=1754 pid=1855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.678000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:39:58.683000 audit[1856]: NETFILTER_CFG table=filter:44 family=10 entries=1 op=nft_register_chain pid=1856 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.683000 audit[1856]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbe4dd100 a2=0 a3=7fffbe4dd0ec items=0 ppid=1754 pid=1856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.683000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:39:58.687000 audit[1858]: NETFILTER_CFG table=filter:45 family=10 entries=1 op=nft_register_rule pid=1858 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.687000 audit[1858]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=528 a0=3 a1=7ffc29f97780 a2=0 a3=7ffc29f9776c items=0 ppid=1754 pid=1858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.687000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:39:58.689000 audit[1859]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=1859 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.689000 audit[1859]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd702e2b60 a2=0 a3=7ffd702e2b4c items=0 ppid=1754 pid=1859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.689000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:39:58.693000 audit[1861]: NETFILTER_CFG table=filter:47 family=10 entries=1 op=nft_register_rule pid=1861 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.693000 audit[1861]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffeaf467be0 a2=0 a3=7ffeaf467bcc items=0 ppid=1754 pid=1861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.693000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:39:58.699000 audit[1864]: NETFILTER_CFG table=filter:48 family=10 entries=2 op=nft_register_chain pid=1864 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.699000 audit[1864]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffef8734450 a2=0 a3=7ffef873443c items=0 ppid=1754 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.699000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:39:58.701000 audit[1865]: NETFILTER_CFG table=filter:49 family=10 entries=1 op=nft_register_chain pid=1865 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.701000 audit[1865]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcfb7c9a30 a2=0 a3=7ffcfb7c9a1c items=0 ppid=1754 pid=1865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.701000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:39:58.704000 audit[1867]: NETFILTER_CFG table=filter:50 family=10 entries=1 op=nft_register_rule pid=1867 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.704000 audit[1867]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcbdfa0df0 a2=0 a3=7ffcbdfa0ddc items=0 ppid=1754 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.704000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:39:58.706000 audit[1868]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=1868 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.706000 audit[1868]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc7c6217c0 a2=0 a3=7ffc7c6217ac items=0 ppid=1754 pid=1868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.706000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:39:58.709000 audit[1870]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_rule pid=1870 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.709000 audit[1870]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe83ec51d0 a2=0 a3=7ffe83ec51bc items=0 ppid=1754 pid=1870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.709000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:39:58.715000 audit[1873]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_rule pid=1873 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.715000 audit[1873]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcfdb5d9f0 a2=0 a3=7ffcfdb5d9dc items=0 ppid=1754 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.715000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:39:58.721000 audit[1876]: NETFILTER_CFG table=filter:54 family=10 entries=1 op=nft_register_rule pid=1876 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.721000 audit[1876]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd037b4550 a2=0 a3=7ffd037b453c items=0 ppid=1754 
pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.721000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:39:58.722000 audit[1877]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=1877 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.722000 audit[1877]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd5a2a43e0 a2=0 a3=7ffd5a2a43cc items=0 ppid=1754 pid=1877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.722000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:39:58.726000 audit[1879]: NETFILTER_CFG table=nat:56 family=10 entries=2 op=nft_register_chain pid=1879 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.726000 audit[1879]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fffb50d57e0 a2=0 a3=7fffb50d57cc items=0 ppid=1754 pid=1879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.726000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:39:58.730000 audit[1882]: NETFILTER_CFG table=nat:57 family=10 entries=2 op=nft_register_chain pid=1882 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.730000 audit[1882]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc9f945800 a2=0 a3=7ffc9f9457ec items=0 ppid=1754 pid=1882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.730000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:39:58.732000 audit[1883]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=1883 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.732000 audit[1883]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb2fd6c20 a2=0 a3=7ffcb2fd6c0c items=0 ppid=1754 pid=1883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.732000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:39:58.735000 audit[1885]: NETFILTER_CFG table=nat:59 family=10 entries=2 op=nft_register_chain pid=1885 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.735000 audit[1885]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffda2205760 a2=0 a3=7ffda220574c items=0 ppid=1754 pid=1885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.735000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:39:58.737000 audit[1886]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1886 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.737000 audit[1886]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf02f47f0 a2=0 a3=7ffdf02f47dc items=0 ppid=1754 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.737000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:39:58.741000 audit[1888]: NETFILTER_CFG table=filter:61 family=10 entries=1 op=nft_register_rule pid=1888 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.741000 audit[1888]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd0fb76970 a2=0 a3=7ffd0fb7695c items=0 ppid=1754 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.741000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:39:58.747000 audit[1891]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_rule pid=1891 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:58.747000 audit[1891]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffe465cae0 a2=0 a3=7fffe465cacc items=0 ppid=1754 pid=1891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.747000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:39:58.751000 audit[1893]: NETFILTER_CFG table=filter:63 family=10 entries=3 op=nft_register_rule pid=1893 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:39:58.751000 audit[1893]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffdd807d890 a2=0 a3=7ffdd807d87c items=0 ppid=1754 pid=1893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.751000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:39:58.752000 audit[1893]: NETFILTER_CFG table=nat:64 family=10 entries=7 op=nft_register_chain pid=1893 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:39:58.752000 audit[1893]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffdd807d890 a2=0 a3=7ffdd807d87c items=0 ppid=1754 pid=1893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:58.752000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:39:58.762210 kubelet[1522]: E1002 19:39:58.762143 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:58.988923 kubelet[1522]: E1002 19:39:58.988583 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190)\"" pod="kube-system/cilium-qt9fs" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" Oct 2 19:39:58.995663 kubelet[1522]: I1002 19:39:58.995603 1522 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-242bh" podStartSLOduration=2.875196742 podCreationTimestamp="2023-10-02 19:39:44 +0000 UTC" firstStartedPulling="2023-10-02 19:39:45.999067042 +0000 UTC m=+4.301026568" lastFinishedPulling="2023-10-02 19:39:58.119414953 +0000 UTC m=+16.421374475" observedRunningTime="2023-10-02 19:39:58.995285889 +0000 UTC m=+17.297245423" watchObservedRunningTime="2023-10-02 19:39:58.995544649 +0000 UTC m=+17.297504182" Oct 2 19:39:59.611912 kubelet[1522]: W1002 19:39:59.611824 1522 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode41d8468_c486_4f54_9489_19b4b7dd3190.slice/cri-containerd-6e009039cea73de1d0606fc2d7286501b9cac86f9937638cba24ae423ccca162.scope WatchSource:0}: container "6e009039cea73de1d0606fc2d7286501b9cac86f9937638cba24ae423ccca162" in namespace "k8s.io": not found Oct 2 19:39:59.762910 kubelet[1522]: E1002 19:39:59.762832 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:00.763688 kubelet[1522]: E1002 19:40:00.763609 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:01.763922 kubelet[1522]: E1002 19:40:01.763841 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:02.721840 kubelet[1522]: W1002 19:40:02.721756 1522 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode41d8468_c486_4f54_9489_19b4b7dd3190.slice/cri-containerd-3e2523581a7f3162414b80f9eb824c3053e250bce07473922cae717410b22597.scope WatchSource:0}: task 3e2523581a7f3162414b80f9eb824c3053e250bce07473922cae717410b22597 not found: not found Oct 2 19:40:02.752610 kubelet[1522]: E1002 19:40:02.752541 1522 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:02.764873 kubelet[1522]: E1002 19:40:02.764796 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:03.766047 kubelet[1522]: E1002 19:40:03.765978 1522 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:04.766974 kubelet[1522]: E1002 19:40:04.766905 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:05.768079 kubelet[1522]: E1002 19:40:05.768017 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:06.769206 kubelet[1522]: E1002 19:40:06.769149 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:07.769833 kubelet[1522]: E1002 19:40:07.769768 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:08.771119 kubelet[1522]: E1002 19:40:08.771058 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:09.772411 kubelet[1522]: E1002 19:40:09.772342 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:10.772796 kubelet[1522]: E1002 19:40:10.772760 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:11.774002 kubelet[1522]: E1002 19:40:11.773936 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:11.819125 update_engine[1119]: I1002 19:40:11.818995 1119 update_attempter.cc:505] Updating boot flags... Oct 2 19:40:12.774114 kubelet[1522]: E1002 19:40:12.774067 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:13.775974 kubelet[1522]: E1002 19:40:13.775908 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:13.924967 env[1128]: time="2023-10-02T19:40:13.924904254Z" level=info msg="CreateContainer within sandbox \"1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:40:13.939449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3320712541.mount: Deactivated successfully. Oct 2 19:40:13.950850 env[1128]: time="2023-10-02T19:40:13.950787732Z" level=info msg="CreateContainer within sandbox \"1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b\"" Oct 2 19:40:13.951795 env[1128]: time="2023-10-02T19:40:13.951753084Z" level=info msg="StartContainer for \"31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b\"" Oct 2 19:40:13.980262 systemd[1]: Started cri-containerd-31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b.scope. Oct 2 19:40:14.000862 systemd[1]: cri-containerd-31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b.scope: Deactivated successfully. 
Oct 2 19:40:14.258750 env[1128]: time="2023-10-02T19:40:14.258679108Z" level=info msg="shim disconnected" id=31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b Oct 2 19:40:14.258750 env[1128]: time="2023-10-02T19:40:14.258748585Z" level=warning msg="cleaning up after shim disconnected" id=31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b namespace=k8s.io Oct 2 19:40:14.259128 env[1128]: time="2023-10-02T19:40:14.258762370Z" level=info msg="cleaning up dead shim" Oct 2 19:40:14.282857 env[1128]: time="2023-10-02T19:40:14.282790339Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:40:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1938 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:40:14Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:40:14.283475 env[1128]: time="2023-10-02T19:40:14.283390250Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:40:14.283858 env[1128]: time="2023-10-02T19:40:14.283706429Z" level=error msg="Failed to pipe stderr of container \"31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b\"" error="reading from a closed fifo" Oct 2 19:40:14.284041 env[1128]: time="2023-10-02T19:40:14.283774620Z" level=error msg="Failed to pipe stdout of container \"31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b\"" error="reading from a closed fifo" Oct 2 19:40:14.286196 env[1128]: time="2023-10-02T19:40:14.286136161Z" level=error msg="StartContainer for \"31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:40:14.286469 kubelet[1522]: E1002 19:40:14.286439 1522 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b" Oct 2 19:40:14.286658 kubelet[1522]: E1002 19:40:14.286623 1522 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:40:14.286658 kubelet[1522]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:40:14.286658 kubelet[1522]: rm /hostbin/cilium-mount Oct 2 19:40:14.286658 kubelet[1522]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ltl9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:40:14.286937 kubelet[1522]: E1002 19:40:14.286686 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-qt9fs" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" Oct 2 19:40:14.776836 kubelet[1522]: E1002 19:40:14.776778 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:14.935647 systemd[1]: run-containerd-runc-k8s.io-31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b-runc.8ddgjz.mount: Deactivated successfully. Oct 2 19:40:14.935797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b-rootfs.mount: Deactivated successfully. 
Oct 2 19:40:15.025706 kubelet[1522]: I1002 19:40:15.025641 1522 scope.go:117] "RemoveContainer" containerID="3e2523581a7f3162414b80f9eb824c3053e250bce07473922cae717410b22597" Oct 2 19:40:15.026162 kubelet[1522]: I1002 19:40:15.026136 1522 scope.go:117] "RemoveContainer" containerID="3e2523581a7f3162414b80f9eb824c3053e250bce07473922cae717410b22597" Oct 2 19:40:15.028211 env[1128]: time="2023-10-02T19:40:15.028100031Z" level=info msg="RemoveContainer for \"3e2523581a7f3162414b80f9eb824c3053e250bce07473922cae717410b22597\"" Oct 2 19:40:15.029370 env[1128]: time="2023-10-02T19:40:15.029311577Z" level=info msg="RemoveContainer for \"3e2523581a7f3162414b80f9eb824c3053e250bce07473922cae717410b22597\"" Oct 2 19:40:15.029507 env[1128]: time="2023-10-02T19:40:15.029445505Z" level=error msg="RemoveContainer for \"3e2523581a7f3162414b80f9eb824c3053e250bce07473922cae717410b22597\" failed" error="failed to set removing state for container \"3e2523581a7f3162414b80f9eb824c3053e250bce07473922cae717410b22597\": container is already in removing state" Oct 2 19:40:15.029754 kubelet[1522]: E1002 19:40:15.029731 1522 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"3e2523581a7f3162414b80f9eb824c3053e250bce07473922cae717410b22597\": container is already in removing state" containerID="3e2523581a7f3162414b80f9eb824c3053e250bce07473922cae717410b22597" Oct 2 19:40:15.029865 kubelet[1522]: E1002 19:40:15.029856 1522 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "3e2523581a7f3162414b80f9eb824c3053e250bce07473922cae717410b22597": container is already in removing state; Skipping pod "cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190)" Oct 2 19:40:15.031066 kubelet[1522]: E1002 19:40:15.030451 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190)\"" pod="kube-system/cilium-qt9fs" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" Oct 2 19:40:15.034798 env[1128]: time="2023-10-02T19:40:15.034742963Z" level=info msg="RemoveContainer for \"3e2523581a7f3162414b80f9eb824c3053e250bce07473922cae717410b22597\" returns successfully" Oct 2 19:40:15.777548 kubelet[1522]: E1002 19:40:15.777496 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:16.778031 kubelet[1522]: E1002 19:40:16.777966 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:17.363536 kubelet[1522]: W1002 19:40:17.363459 1522 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode41d8468_c486_4f54_9489_19b4b7dd3190.slice/cri-containerd-31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b.scope WatchSource:0}: task 31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b not found: not found Oct 2 19:40:17.778197 kubelet[1522]: E1002 19:40:17.778135 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:18.778399 kubelet[1522]: E1002 19:40:18.778331 1522 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:19.779235 kubelet[1522]: E1002 19:40:19.779167 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:20.780062 kubelet[1522]: E1002 19:40:20.779981 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:21.780762 kubelet[1522]: E1002 19:40:21.780709 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:22.752703 kubelet[1522]: E1002 19:40:22.752659 1522 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:22.781903 kubelet[1522]: E1002 19:40:22.781832 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:23.782635 kubelet[1522]: E1002 19:40:23.782565 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:24.783622 kubelet[1522]: E1002 19:40:24.783562 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:25.783798 kubelet[1522]: E1002 19:40:25.783725 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:26.784695 kubelet[1522]: E1002 19:40:26.784622 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:27.785147 kubelet[1522]: E1002 19:40:27.785073 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:28.786108 kubelet[1522]: E1002 19:40:28.786043 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:29.787303 kubelet[1522]: E1002 19:40:29.787219 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:29.922851 kubelet[1522]: E1002 19:40:29.922780 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190)\"" pod="kube-system/cilium-qt9fs" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" Oct 2 19:40:30.788440 kubelet[1522]: E1002 19:40:30.788283 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:31.789422 kubelet[1522]: E1002 19:40:31.789342 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:32.790173 kubelet[1522]: E1002 19:40:32.790103 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:33.790654 kubelet[1522]: E1002 19:40:33.790581 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:34.791644 kubelet[1522]: E1002 19:40:34.791568 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:35.791794 kubelet[1522]: E1002 
19:40:35.791723 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:36.792705 kubelet[1522]: E1002 19:40:36.792636 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:37.793177 kubelet[1522]: E1002 19:40:37.793093 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:38.794074 kubelet[1522]: E1002 19:40:38.793983 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:39.794443 kubelet[1522]: E1002 19:40:39.794376 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:40.794650 kubelet[1522]: E1002 19:40:40.794571 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:41.795796 kubelet[1522]: E1002 19:40:41.795720 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:41.926052 env[1128]: time="2023-10-02T19:40:41.925966154Z" level=info msg="CreateContainer within sandbox \"1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:40:41.940368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1560734796.mount: Deactivated successfully. Oct 2 19:40:41.949550 env[1128]: time="2023-10-02T19:40:41.949470259Z" level=info msg="CreateContainer within sandbox \"1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7\"" Oct 2 19:40:41.950618 env[1128]: time="2023-10-02T19:40:41.950542461Z" level=info msg="StartContainer for \"cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7\"" Oct 2 19:40:41.977240 systemd[1]: Started cri-containerd-cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7.scope. Oct 2 19:40:41.999066 systemd[1]: cri-containerd-cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7.scope: Deactivated successfully. 
Oct 2 19:40:42.013000 env[1128]: time="2023-10-02T19:40:42.012910196Z" level=info msg="shim disconnected" id=cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7 Oct 2 19:40:42.013000 env[1128]: time="2023-10-02T19:40:42.012994646Z" level=warning msg="cleaning up after shim disconnected" id=cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7 namespace=k8s.io Oct 2 19:40:42.013000 env[1128]: time="2023-10-02T19:40:42.013009710Z" level=info msg="cleaning up dead shim" Oct 2 19:40:42.024979 env[1128]: time="2023-10-02T19:40:42.024902764Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:40:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1977 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:40:42Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:40:42.025340 env[1128]: time="2023-10-02T19:40:42.025252735Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:40:42.025621 env[1128]: time="2023-10-02T19:40:42.025560905Z" level=error msg="Failed to pipe stdout of container \"cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7\"" error="reading from a closed fifo" Oct 2 19:40:42.025848 env[1128]: time="2023-10-02T19:40:42.025778645Z" level=error msg="Failed to pipe stderr of container \"cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7\"" error="reading from a closed fifo" Oct 2 19:40:42.028566 env[1128]: time="2023-10-02T19:40:42.028441561Z" level=error msg="StartContainer for \"cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:40:42.029077 kubelet[1522]: E1002 19:40:42.028829 1522 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7" Oct 2 19:40:42.029077 kubelet[1522]: E1002 19:40:42.028985 1522 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:40:42.029077 kubelet[1522]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:40:42.029077 kubelet[1522]: rm /hostbin/cilium-mount Oct 2 19:40:42.029077 kubelet[1522]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ltl9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:40:42.029077 kubelet[1522]: E1002 19:40:42.029045 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-qt9fs" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" Oct 2 19:40:42.085816 kubelet[1522]: I1002 19:40:42.084607 1522 scope.go:117] "RemoveContainer" containerID="31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b" Oct 2 19:40:42.085816 kubelet[1522]: I1002 19:40:42.085279 1522 scope.go:117] "RemoveContainer" containerID="31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b" Oct 2 19:40:42.087304 env[1128]: time="2023-10-02T19:40:42.087253233Z" level=info msg="RemoveContainer for \"31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b\"" Oct 2 19:40:42.088216 env[1128]: time="2023-10-02T19:40:42.088181655Z" level=info msg="RemoveContainer for \"31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b\"" Oct 2 19:40:42.088588 env[1128]: time="2023-10-02T19:40:42.088538393Z" level=error msg="RemoveContainer for \"31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b\" failed" error="failed to set removing state for container \"31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b\": container is already in removing state" Oct 2 19:40:42.089089 kubelet[1522]: E1002 19:40:42.089058 1522 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b\": 
container is already in removing state" containerID="31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b" Oct 2 19:40:42.089239 kubelet[1522]: E1002 19:40:42.089108 1522 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b": container is already in removing state; Skipping pod "cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190)" Oct 2 19:40:42.089571 kubelet[1522]: E1002 19:40:42.089547 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190)\"" pod="kube-system/cilium-qt9fs" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" Oct 2 19:40:42.092399 env[1128]: time="2023-10-02T19:40:42.092286234Z" level=info msg="RemoveContainer for \"31434b57f1245dbbacffd95ec105bd99fd542e9bcf05b6318e955868218da34b\" returns successfully" Oct 2 19:40:42.752730 kubelet[1522]: E1002 19:40:42.752658 1522 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:42.796727 kubelet[1522]: E1002 19:40:42.796677 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:42.936705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7-rootfs.mount: Deactivated successfully. Oct 2 19:40:43.797684 kubelet[1522]: E1002 19:40:43.797604 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:44.798282 kubelet[1522]: E1002 19:40:44.798181 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:45.120544 kubelet[1522]: W1002 19:40:45.118213 1522 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode41d8468_c486_4f54_9489_19b4b7dd3190.slice/cri-containerd-cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7.scope WatchSource:0}: task cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7 not found: not found Oct 2 19:40:45.799018 kubelet[1522]: E1002 19:40:45.798947 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:46.799544 kubelet[1522]: E1002 19:40:46.799463 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:47.800614 kubelet[1522]: E1002 19:40:47.800555 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:48.801298 kubelet[1522]: E1002 19:40:48.801240 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:49.801937 kubelet[1522]: E1002 19:40:49.801892 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:50.802568 kubelet[1522]: E1002 19:40:50.802499 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:40:51.802919 kubelet[1522]: E1002 19:40:51.802852 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:52.803287 kubelet[1522]: E1002 19:40:52.803219 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:53.804186 kubelet[1522]: E1002 19:40:53.804027 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:54.804908 kubelet[1522]: E1002 19:40:54.804829 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:55.805435 kubelet[1522]: E1002 19:40:55.805362 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:56.806099 kubelet[1522]: E1002 19:40:56.806036 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:56.923729 kubelet[1522]: E1002 19:40:56.923683 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190)\"" pod="kube-system/cilium-qt9fs" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" Oct 2 19:40:57.807053 kubelet[1522]: E1002 19:40:57.806988 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:58.807549 kubelet[1522]: E1002 19:40:58.807493 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:59.808051 kubelet[1522]: E1002 19:40:59.807974 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:00.808636 kubelet[1522]: E1002 19:41:00.808566 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:01.809125 kubelet[1522]: E1002 19:41:01.809063 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:02.752628 kubelet[1522]: E1002 19:41:02.752564 1522 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:02.810179 kubelet[1522]: E1002 19:41:02.810112 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:03.810927 kubelet[1522]: E1002 19:41:03.810861 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:04.811658 kubelet[1522]: E1002 19:41:04.811588 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:05.812159 kubelet[1522]: E1002 19:41:05.812090 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:06.813042 kubelet[1522]: E1002 19:41:06.812971 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:07.813819 kubelet[1522]: E1002 19:41:07.813748 1522 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:08.815013 kubelet[1522]: E1002 19:41:08.814937 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:09.815638 kubelet[1522]: E1002 19:41:09.815557 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:10.816539 kubelet[1522]: E1002 19:41:10.816463 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:11.817145 kubelet[1522]: E1002 19:41:11.817061 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:11.923014 kubelet[1522]: E1002 19:41:11.922959 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190)\"" pod="kube-system/cilium-qt9fs" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" Oct 2 19:41:12.818279 kubelet[1522]: E1002 19:41:12.818177 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:13.818872 kubelet[1522]: E1002 19:41:13.818789 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:14.819788 kubelet[1522]: E1002 19:41:14.819704 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:15.820602 kubelet[1522]: E1002 19:41:15.820559 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:16.821665 kubelet[1522]: E1002 19:41:16.821614 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:17.822408 kubelet[1522]: E1002 19:41:17.822337 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:18.823224 kubelet[1522]: E1002 19:41:18.823151 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:19.823527 kubelet[1522]: E1002 19:41:19.823459 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:20.823666 kubelet[1522]: E1002 19:41:20.823589 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:21.824394 kubelet[1522]: E1002 19:41:21.824342 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:22.752362 kubelet[1522]: E1002 19:41:22.752303 1522 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:22.824636 kubelet[1522]: E1002 19:41:22.824565 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:23.825632 kubelet[1522]: E1002 19:41:23.825543 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:24.825732 
kubelet[1522]: E1002 19:41:24.825673 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:25.826656 kubelet[1522]: E1002 19:41:25.826582 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:26.827684 kubelet[1522]: E1002 19:41:26.827607 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:26.926700 env[1128]: time="2023-10-02T19:41:26.926611459Z" level=info msg="CreateContainer within sandbox \"1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:41:26.944730 env[1128]: time="2023-10-02T19:41:26.944659140Z" level=info msg="CreateContainer within sandbox \"1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665\"" Oct 2 19:41:26.945465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1065589805.mount: Deactivated successfully. Oct 2 19:41:26.947144 env[1128]: time="2023-10-02T19:41:26.946733173Z" level=info msg="StartContainer for \"10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665\"" Oct 2 19:41:26.989450 systemd[1]: Started cri-containerd-10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665.scope. Oct 2 19:41:27.006093 systemd[1]: cri-containerd-10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665.scope: Deactivated successfully. Oct 2 19:41:27.012608 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665-rootfs.mount: Deactivated successfully. 
Oct 2 19:41:27.025341 env[1128]: time="2023-10-02T19:41:27.025246984Z" level=info msg="shim disconnected" id=10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665 Oct 2 19:41:27.025341 env[1128]: time="2023-10-02T19:41:27.025338909Z" level=warning msg="cleaning up after shim disconnected" id=10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665 namespace=k8s.io Oct 2 19:41:27.025827 env[1128]: time="2023-10-02T19:41:27.025355371Z" level=info msg="cleaning up dead shim" Oct 2 19:41:27.039253 env[1128]: time="2023-10-02T19:41:27.039181012Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:41:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2019 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:41:27Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:41:27.039660 env[1128]: time="2023-10-02T19:41:27.039576935Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:41:27.040033 env[1128]: time="2023-10-02T19:41:27.039975261Z" level=error msg="Failed to pipe stderr of container \"10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665\"" error="reading from a closed fifo" Oct 2 19:41:27.045610 env[1128]: time="2023-10-02T19:41:27.045539844Z" level=error msg="Failed to pipe stdout of container \"10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665\"" error="reading from a closed fifo" Oct 2 19:41:27.047948 env[1128]: time="2023-10-02T19:41:27.047881052Z" level=error msg="StartContainer for \"10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:41:27.048287 kubelet[1522]: E1002 19:41:27.048241 1522 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665" Oct 2 19:41:27.048476 kubelet[1522]: E1002 19:41:27.048452 1522 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:41:27.048476 kubelet[1522]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:41:27.048476 kubelet[1522]: rm /hostbin/cilium-mount Oct 2 19:41:27.048476 kubelet[1522]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ltl9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:41:27.048822 kubelet[1522]: E1002 19:41:27.048571 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-qt9fs" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" Oct 2 19:41:27.177002 kubelet[1522]: I1002 19:41:27.175576 1522 scope.go:117] "RemoveContainer" containerID="cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7" Oct 2 19:41:27.177366 kubelet[1522]: I1002 19:41:27.177338 1522 scope.go:117] "RemoveContainer" containerID="cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7" Oct 2 19:41:27.179431 env[1128]: time="2023-10-02T19:41:27.179339514Z" level=info msg="RemoveContainer for \"cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7\"" Oct 2 19:41:27.179838 env[1128]: time="2023-10-02T19:41:27.179797312Z" level=info msg="RemoveContainer for \"cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7\"" Oct 2 19:41:27.180734 env[1128]: time="2023-10-02T19:41:27.180654337Z" level=error msg="RemoveContainer for \"cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7\" failed" error="failed to set removing state for container \"cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7\": container is already in removing state" Oct 2 19:41:27.181231 kubelet[1522]: E1002 19:41:27.181203 1522 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7\": 
container is already in removing state" containerID="cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7" Oct 2 19:41:27.181374 kubelet[1522]: E1002 19:41:27.181253 1522 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7": container is already in removing state; Skipping pod "cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190)" Oct 2 19:41:27.181806 kubelet[1522]: E1002 19:41:27.181781 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190)\"" pod="kube-system/cilium-qt9fs" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" Oct 2 19:41:27.184876 env[1128]: time="2023-10-02T19:41:27.184841396Z" level=info msg="RemoveContainer for \"cef3a9a8a87b593bad2423024dbf96a0918a12b34af634ec2b486345519e51d7\" returns successfully" Oct 2 19:41:27.828129 kubelet[1522]: E1002 19:41:27.828069 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:28.828924 kubelet[1522]: E1002 19:41:28.828870 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:29.830230 kubelet[1522]: E1002 19:41:29.830170 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:30.130854 kubelet[1522]: W1002 19:41:30.130651 1522 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode41d8468_c486_4f54_9489_19b4b7dd3190.slice/cri-containerd-10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665.scope WatchSource:0}: task 10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665 not found: not found Oct 2 19:41:30.830809 kubelet[1522]: E1002 19:41:30.830725 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:31.831384 kubelet[1522]: E1002 19:41:31.831307 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:32.832330 kubelet[1522]: E1002 19:41:32.832248 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:33.832955 kubelet[1522]: E1002 19:41:33.832881 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:34.833433 kubelet[1522]: E1002 19:41:34.833362 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:35.834238 kubelet[1522]: E1002 19:41:35.834170 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:36.834634 kubelet[1522]: E1002 19:41:36.834551 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:37.835775 kubelet[1522]: E1002 19:41:37.835705 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:38.836130 
kubelet[1522]: E1002 19:41:38.836057 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:39.836630 kubelet[1522]: E1002 19:41:39.836559 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:39.922417 kubelet[1522]: E1002 19:41:39.922360 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190)\"" pod="kube-system/cilium-qt9fs" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" Oct 2 19:41:40.837806 kubelet[1522]: E1002 19:41:40.837737 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:41.838224 kubelet[1522]: E1002 19:41:41.838148 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:42.751897 kubelet[1522]: E1002 19:41:42.751834 1522 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:42.800610 kubelet[1522]: E1002 19:41:42.800542 1522 kubelet_node_status.go:452] "Node not becoming ready in time after startup" Oct 2 19:41:42.839198 kubelet[1522]: E1002 19:41:42.839116 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:42.885145 kubelet[1522]: E1002 19:41:42.885097 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:43.839379 kubelet[1522]: E1002 19:41:43.839299 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:44.840136 kubelet[1522]: E1002 19:41:44.840057 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:45.840849 kubelet[1522]: E1002 19:41:45.840783 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:46.841163 kubelet[1522]: E1002 19:41:46.841045 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:47.842149 kubelet[1522]: E1002 19:41:47.842072 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:47.886153 kubelet[1522]: E1002 19:41:47.886093 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:48.843271 kubelet[1522]: E1002 19:41:48.843204 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:49.843749 kubelet[1522]: E1002 19:41:49.843685 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:50.844256 kubelet[1522]: E1002 19:41:50.844175 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:51.844987 
kubelet[1522]: E1002 19:41:51.844917 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:51.923309 kubelet[1522]: E1002 19:41:51.923248 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190)\"" pod="kube-system/cilium-qt9fs" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" Oct 2 19:41:52.845511 kubelet[1522]: E1002 19:41:52.845426 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:52.886701 kubelet[1522]: E1002 19:41:52.886635 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:53.846243 kubelet[1522]: E1002 19:41:53.846178 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:54.847209 kubelet[1522]: E1002 19:41:54.847141 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:55.848329 kubelet[1522]: E1002 19:41:55.848245 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:56.848684 kubelet[1522]: E1002 19:41:56.848614 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:57.848920 kubelet[1522]: E1002 19:41:57.848853 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:57.887948 kubelet[1522]: E1002 19:41:57.887908 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:58.849965 kubelet[1522]: E1002 19:41:58.849900 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:59.850996 kubelet[1522]: E1002 19:41:59.850925 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:00.852096 kubelet[1522]: E1002 19:42:00.852021 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:01.852309 kubelet[1522]: E1002 19:42:01.852230 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:02.752619 kubelet[1522]: E1002 19:42:02.752555 1522 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:02.853248 kubelet[1522]: E1002 19:42:02.853163 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:02.888613 kubelet[1522]: E1002 19:42:02.888567 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:03.854203 kubelet[1522]: E1002 19:42:03.854126 1522 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:04.855360 kubelet[1522]: E1002 19:42:04.855282 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:05.856045 kubelet[1522]: E1002 19:42:05.855930 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:06.856386 kubelet[1522]: E1002 19:42:06.856328 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:06.923062 kubelet[1522]: E1002 19:42:06.923015 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190)\"" pod="kube-system/cilium-qt9fs" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" Oct 2 19:42:07.857645 kubelet[1522]: E1002 19:42:07.857576 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:07.889644 kubelet[1522]: E1002 19:42:07.889596 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:08.858437 kubelet[1522]: E1002 19:42:08.858360 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:09.859309 kubelet[1522]: E1002 19:42:09.859238 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:10.860338 kubelet[1522]: E1002 19:42:10.860257 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:11.860695 kubelet[1522]: E1002 19:42:11.860615 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:12.860861 kubelet[1522]: E1002 19:42:12.860781 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:12.891250 kubelet[1522]: E1002 19:42:12.891202 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:13.861614 kubelet[1522]: E1002 19:42:13.861542 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:14.862801 kubelet[1522]: E1002 19:42:14.862721 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:15.863450 kubelet[1522]: E1002 19:42:15.863372 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:16.863920 kubelet[1522]: E1002 19:42:16.863847 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:17.864911 kubelet[1522]: E1002 19:42:17.864832 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:17.892925 kubelet[1522]: E1002 19:42:17.892872 
1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:18.865914 kubelet[1522]: E1002 19:42:18.865823 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:19.866554 kubelet[1522]: E1002 19:42:19.866478 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:20.866849 kubelet[1522]: E1002 19:42:20.866764 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:21.867625 kubelet[1522]: E1002 19:42:21.867549 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:21.923142 kubelet[1522]: E1002 19:42:21.923073 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190)\"" pod="kube-system/cilium-qt9fs" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" Oct 2 19:42:22.752166 kubelet[1522]: E1002 19:42:22.752097 1522 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:22.868860 kubelet[1522]: E1002 19:42:22.868785 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:22.893957 kubelet[1522]: E1002 19:42:22.893914 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:23.869581 kubelet[1522]: E1002 19:42:23.869397 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:24.870037 kubelet[1522]: E1002 19:42:24.869974 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:25.871111 kubelet[1522]: E1002 19:42:25.871044 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:26.871741 kubelet[1522]: E1002 19:42:26.871660 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:27.872054 kubelet[1522]: E1002 19:42:27.871969 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:27.895115 kubelet[1522]: E1002 19:42:27.895062 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:28.873000 kubelet[1522]: E1002 19:42:28.872916 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:29.874144 kubelet[1522]: E1002 19:42:29.874066 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:30.874973 kubelet[1522]: E1002 19:42:30.874896 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:31.875729 kubelet[1522]: E1002 19:42:31.875594 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:32.876031 kubelet[1522]: E1002 19:42:32.875962 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:32.895806 kubelet[1522]: E1002 19:42:32.895767 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:32.923877 kubelet[1522]: E1002 19:42:32.923822 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190)\"" pod="kube-system/cilium-qt9fs" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" Oct 2 19:42:33.876951 kubelet[1522]: E1002 19:42:33.876876 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:34.877186 kubelet[1522]: E1002 19:42:34.877104 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:35.877965 kubelet[1522]: E1002 19:42:35.877883 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:36.878791 kubelet[1522]: E1002 19:42:36.878712 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:37.879849 kubelet[1522]: E1002 19:42:37.879768 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:37.896890 kubelet[1522]: E1002 19:42:37.896823 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:38.880380 kubelet[1522]: E1002 19:42:38.880311 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:39.880874 kubelet[1522]: E1002 19:42:39.880797 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:40.881405 kubelet[1522]: E1002 19:42:40.881335 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:41.881900 kubelet[1522]: E1002 19:42:41.881827 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:42.752725 kubelet[1522]: E1002 19:42:42.752632 1522 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:42.882693 kubelet[1522]: E1002 19:42:42.882639 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:42.898101 kubelet[1522]: E1002 19:42:42.898051 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:43.883413 kubelet[1522]: 
E1002 19:42:43.883336 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:44.883727 kubelet[1522]: E1002 19:42:44.883650 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:45.884108 kubelet[1522]: E1002 19:42:45.884041 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:46.884671 kubelet[1522]: E1002 19:42:46.884603 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:46.923448 kubelet[1522]: E1002 19:42:46.923395 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190)\"" pod="kube-system/cilium-qt9fs" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" Oct 2 19:42:47.885556 kubelet[1522]: E1002 19:42:47.885477 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:47.899426 kubelet[1522]: E1002 19:42:47.899390 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:48.886461 kubelet[1522]: E1002 19:42:48.886387 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:49.887420 kubelet[1522]: E1002 19:42:49.887343 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:50.888363 kubelet[1522]: E1002 19:42:50.888306 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:51.889107 kubelet[1522]: E1002 19:42:51.889043 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:52.889330 kubelet[1522]: E1002 19:42:52.889255 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:52.900512 kubelet[1522]: E1002 19:42:52.900449 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:53.889566 kubelet[1522]: E1002 19:42:53.889503 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:54.890306 kubelet[1522]: E1002 19:42:54.890243 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:55.890733 kubelet[1522]: E1002 19:42:55.890673 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:56.891159 kubelet[1522]: E1002 19:42:56.891071 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:57.891402 kubelet[1522]: E1002 19:42:57.891319 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
Oct 2 19:42:57.902296 kubelet[1522]: E1002 19:42:57.902250 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:57.926167 env[1128]: time="2023-10-02T19:42:57.926082412Z" level=info msg="CreateContainer within sandbox \"1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}" Oct 2 19:42:57.942645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2919199562.mount: Deactivated successfully. Oct 2 19:42:57.950131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3349472542.mount: Deactivated successfully. Oct 2 19:42:57.955792 env[1128]: time="2023-10-02T19:42:57.955727705Z" level=info msg="CreateContainer within sandbox \"1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id \"8f97d64e0a2cf02053ed253d8dc8d989dd695b9eded68328518d1213589b4092\"" Oct 2 19:42:57.956748 env[1128]: time="2023-10-02T19:42:57.956701147Z" level=info msg="StartContainer for \"8f97d64e0a2cf02053ed253d8dc8d989dd695b9eded68328518d1213589b4092\"" Oct 2 19:42:57.990857 systemd[1]: Started cri-containerd-8f97d64e0a2cf02053ed253d8dc8d989dd695b9eded68328518d1213589b4092.scope. Oct 2 19:42:58.012214 systemd[1]: cri-containerd-8f97d64e0a2cf02053ed253d8dc8d989dd695b9eded68328518d1213589b4092.scope: Deactivated successfully. Oct 2 19:42:58.031417 env[1128]: time="2023-10-02T19:42:58.031320999Z" level=info msg="shim disconnected" id=8f97d64e0a2cf02053ed253d8dc8d989dd695b9eded68328518d1213589b4092 Oct 2 19:42:58.031766 env[1128]: time="2023-10-02T19:42:58.031423918Z" level=warning msg="cleaning up after shim disconnected" id=8f97d64e0a2cf02053ed253d8dc8d989dd695b9eded68328518d1213589b4092 namespace=k8s.io Oct 2 19:42:58.031766 env[1128]: time="2023-10-02T19:42:58.031441266Z" level=info msg="cleaning up dead shim" Oct 2 19:42:58.045297 env[1128]: time="2023-10-02T19:42:58.045194408Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:42:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2071 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:42:58Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8f97d64e0a2cf02053ed253d8dc8d989dd695b9eded68328518d1213589b4092/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:42:58.045730 env[1128]: time="2023-10-02T19:42:58.045630147Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:42:58.046631 env[1128]: time="2023-10-02T19:42:58.046569442Z" level=error msg="Failed to pipe stdout of container \"8f97d64e0a2cf02053ed253d8dc8d989dd695b9eded68328518d1213589b4092\"" error="reading from a closed fifo" Oct 2 19:42:58.046937 env[1128]: time="2023-10-02T19:42:58.046822288Z" level=error msg="Failed to pipe stderr of container \"8f97d64e0a2cf02053ed253d8dc8d989dd695b9eded68328518d1213589b4092\"" error="reading from a closed fifo" Oct 2 19:42:58.049221 env[1128]: time="2023-10-02T19:42:58.049160079Z" level=error msg="StartContainer for \"8f97d64e0a2cf02053ed253d8dc8d989dd695b9eded68328518d1213589b4092\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: 
invalid argument: unknown" Oct 2 19:42:58.049662 kubelet[1522]: E1002 19:42:58.049605 1522 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8f97d64e0a2cf02053ed253d8dc8d989dd695b9eded68328518d1213589b4092" Oct 2 19:42:58.049832 kubelet[1522]: E1002 19:42:58.049803 1522 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:42:58.049832 kubelet[1522]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:42:58.049832 kubelet[1522]: rm /hostbin/cilium-mount Oct 2 19:42:58.049832 kubelet[1522]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ltl9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:42:58.050117 kubelet[1522]: E1002 19:42:58.049875 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-qt9fs" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" Oct 2 19:42:58.363323 kubelet[1522]: I1002 19:42:58.362808 1522 scope.go:117] "RemoveContainer" containerID="10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665" Oct 2 19:42:58.363323 kubelet[1522]: I1002 19:42:58.363279 1522 scope.go:117] "RemoveContainer" 
containerID="10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665" Oct 2 19:42:58.365639 env[1128]: time="2023-10-02T19:42:58.365582639Z" level=info msg="RemoveContainer for \"10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665\"" Oct 2 19:42:58.366100 env[1128]: time="2023-10-02T19:42:58.365651906Z" level=info msg="RemoveContainer for \"10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665\"" Oct 2 19:42:58.368184 env[1128]: time="2023-10-02T19:42:58.368107753Z" level=error msg="RemoveContainer for \"10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665\" failed" error="failed to set removing state for container \"10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665\": container is already in removing state" Oct 2 19:42:58.368595 kubelet[1522]: E1002 19:42:58.368557 1522 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665\": container is already in removing state" containerID="10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665" Oct 2 19:42:58.368771 kubelet[1522]: I1002 19:42:58.368710 1522 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665"} err="rpc error: code = Unknown desc = failed to set removing state for container \"10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665\": container is already in removing state" Oct 2 19:42:58.372016 env[1128]: time="2023-10-02T19:42:58.371967115Z" level=info msg="RemoveContainer for \"10e60daffe0b0d8cb5ca4e2cc420c0192e2e3ca8576a70f5061fa239f2871665\" returns successfully" Oct 2 19:42:58.372750 kubelet[1522]: E1002 19:42:58.372721 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-qt9fs_kube-system(e41d8468-c486-4f54-9489-19b4b7dd3190)\"" pod="kube-system/cilium-qt9fs" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" Oct 2 19:42:58.891824 kubelet[1522]: E1002 19:42:58.891749 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:58.938675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f97d64e0a2cf02053ed253d8dc8d989dd695b9eded68328518d1213589b4092-rootfs.mount: Deactivated successfully. 
Oct 2 19:42:59.892440 kubelet[1522]: E1002 19:42:59.892374 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:00.893128 kubelet[1522]: E1002 19:43:00.893050 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:01.136457 kubelet[1522]: W1002 19:43:01.136391 1522 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode41d8468_c486_4f54_9489_19b4b7dd3190.slice/cri-containerd-8f97d64e0a2cf02053ed253d8dc8d989dd695b9eded68328518d1213589b4092.scope WatchSource:0}: task 8f97d64e0a2cf02053ed253d8dc8d989dd695b9eded68328518d1213589b4092 not found: not found Oct 2 19:43:01.547527 env[1128]: time="2023-10-02T19:43:01.547441107Z" level=info msg="StopPodSandbox for \"1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c\"" Oct 2 19:43:01.550558 env[1128]: time="2023-10-02T19:43:01.547548866Z" level=info msg="Container to stop \"8f97d64e0a2cf02053ed253d8dc8d989dd695b9eded68328518d1213589b4092\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:43:01.549871 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c-shm.mount: Deactivated successfully. Oct 2 19:43:01.560168 systemd[1]: cri-containerd-1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c.scope: Deactivated successfully. Oct 2 19:43:01.572849 kernel: kauditd_printk_skb: 190 callbacks suppressed Oct 2 19:43:01.572985 kernel: audit: type=1334 audit(1696275781.560:703): prog-id=70 op=UNLOAD Oct 2 19:43:01.560000 audit: BPF prog-id=70 op=UNLOAD Oct 2 19:43:01.574000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:43:01.582548 kernel: audit: type=1334 audit(1696275781.574:704): prog-id=73 op=UNLOAD Oct 2 19:43:01.602214 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c-rootfs.mount: Deactivated successfully. 
Oct 2 19:43:01.617778 env[1128]: time="2023-10-02T19:43:01.617711367Z" level=info msg="shim disconnected" id=1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c Oct 2 19:43:01.618059 env[1128]: time="2023-10-02T19:43:01.617783338Z" level=warning msg="cleaning up after shim disconnected" id=1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c namespace=k8s.io Oct 2 19:43:01.618059 env[1128]: time="2023-10-02T19:43:01.617799367Z" level=info msg="cleaning up dead shim" Oct 2 19:43:01.629848 env[1128]: time="2023-10-02T19:43:01.629781999Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2104 runtime=io.containerd.runc.v2\n" Oct 2 19:43:01.630239 env[1128]: time="2023-10-02T19:43:01.630200518Z" level=info msg="TearDown network for sandbox \"1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c\" successfully" Oct 2 19:43:01.630239 env[1128]: time="2023-10-02T19:43:01.630237716Z" level=info msg="StopPodSandbox for \"1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c\" returns successfully" Oct 2 19:43:01.793390 kubelet[1522]: I1002 19:43:01.793290 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-cilium-run\") pod \"e41d8468-c486-4f54-9489-19b4b7dd3190\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " Oct 2 19:43:01.793390 kubelet[1522]: I1002 19:43:01.793353 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-host-proc-sys-kernel\") pod \"e41d8468-c486-4f54-9489-19b4b7dd3190\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " Oct 2 19:43:01.793390 kubelet[1522]: I1002 19:43:01.793350 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e41d8468-c486-4f54-9489-19b4b7dd3190" (UID: "e41d8468-c486-4f54-9489-19b4b7dd3190"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:01.794000 kubelet[1522]: I1002 19:43:01.793393 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e41d8468-c486-4f54-9489-19b4b7dd3190-cilium-config-path\") pod \"e41d8468-c486-4f54-9489-19b4b7dd3190\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " Oct 2 19:43:01.794000 kubelet[1522]: I1002 19:43:01.793641 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e41d8468-c486-4f54-9489-19b4b7dd3190-hubble-tls\") pod \"e41d8468-c486-4f54-9489-19b4b7dd3190\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " Oct 2 19:43:01.794000 kubelet[1522]: I1002 19:43:01.793674 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-hostproc\") pod \"e41d8468-c486-4f54-9489-19b4b7dd3190\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " Oct 2 19:43:01.794664 kubelet[1522]: I1002 19:43:01.794242 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltl9w\" (UniqueName: \"kubernetes.io/projected/e41d8468-c486-4f54-9489-19b4b7dd3190-kube-api-access-ltl9w\") pod \"e41d8468-c486-4f54-9489-19b4b7dd3190\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " Oct 2 19:43:01.794664 kubelet[1522]: I1002 19:43:01.794308 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-bpf-maps\") pod \"e41d8468-c486-4f54-9489-19b4b7dd3190\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " Oct 2 19:43:01.794664 kubelet[1522]: I1002 19:43:01.794342 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-cilium-cgroup\") pod \"e41d8468-c486-4f54-9489-19b4b7dd3190\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " Oct 2 19:43:01.794664 kubelet[1522]: I1002 19:43:01.794376 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e41d8468-c486-4f54-9489-19b4b7dd3190-clustermesh-secrets\") pod \"e41d8468-c486-4f54-9489-19b4b7dd3190\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " Oct 2 19:43:01.794664 kubelet[1522]: I1002 19:43:01.794407 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-cni-path\") pod \"e41d8468-c486-4f54-9489-19b4b7dd3190\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " Oct 2 19:43:01.794664 kubelet[1522]: I1002 19:43:01.794454 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-etc-cni-netd\") pod \"e41d8468-c486-4f54-9489-19b4b7dd3190\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " Oct 2 19:43:01.794664 kubelet[1522]: I1002 19:43:01.794501 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-xtables-lock\") pod \"e41d8468-c486-4f54-9489-19b4b7dd3190\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " Oct 2 19:43:01.794664 
kubelet[1522]: I1002 19:43:01.794535 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-lib-modules\") pod \"e41d8468-c486-4f54-9489-19b4b7dd3190\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " Oct 2 19:43:01.794664 kubelet[1522]: I1002 19:43:01.794575 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-host-proc-sys-net\") pod \"e41d8468-c486-4f54-9489-19b4b7dd3190\" (UID: \"e41d8468-c486-4f54-9489-19b4b7dd3190\") " Oct 2 19:43:01.794664 kubelet[1522]: I1002 19:43:01.794611 1522 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-cilium-run\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:43:01.795858 kubelet[1522]: I1002 19:43:01.794643 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e41d8468-c486-4f54-9489-19b4b7dd3190" (UID: "e41d8468-c486-4f54-9489-19b4b7dd3190"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:01.796598 kubelet[1522]: I1002 19:43:01.796559 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e41d8468-c486-4f54-9489-19b4b7dd3190-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e41d8468-c486-4f54-9489-19b4b7dd3190" (UID: "e41d8468-c486-4f54-9489-19b4b7dd3190"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:43:01.796728 kubelet[1522]: I1002 19:43:01.796626 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e41d8468-c486-4f54-9489-19b4b7dd3190" (UID: "e41d8468-c486-4f54-9489-19b4b7dd3190"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:01.797897 kubelet[1522]: I1002 19:43:01.797787 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-cni-path" (OuterVolumeSpecName: "cni-path") pod "e41d8468-c486-4f54-9489-19b4b7dd3190" (UID: "e41d8468-c486-4f54-9489-19b4b7dd3190"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:01.797897 kubelet[1522]: I1002 19:43:01.797836 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e41d8468-c486-4f54-9489-19b4b7dd3190" (UID: "e41d8468-c486-4f54-9489-19b4b7dd3190"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:01.797897 kubelet[1522]: I1002 19:43:01.797863 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e41d8468-c486-4f54-9489-19b4b7dd3190" (UID: "e41d8468-c486-4f54-9489-19b4b7dd3190"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:01.798202 kubelet[1522]: I1002 19:43:01.798180 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-hostproc" (OuterVolumeSpecName: "hostproc") pod "e41d8468-c486-4f54-9489-19b4b7dd3190" (UID: "e41d8468-c486-4f54-9489-19b4b7dd3190"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:01.799782 kubelet[1522]: I1002 19:43:01.798346 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e41d8468-c486-4f54-9489-19b4b7dd3190" (UID: "e41d8468-c486-4f54-9489-19b4b7dd3190"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:01.799782 kubelet[1522]: I1002 19:43:01.799712 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e41d8468-c486-4f54-9489-19b4b7dd3190" (UID: "e41d8468-c486-4f54-9489-19b4b7dd3190"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:01.799974 kubelet[1522]: I1002 19:43:01.799809 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e41d8468-c486-4f54-9489-19b4b7dd3190" (UID: "e41d8468-c486-4f54-9489-19b4b7dd3190"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:01.804781 systemd[1]: var-lib-kubelet-pods-e41d8468\x2dc486\x2d4f54\x2d9489\x2d19b4b7dd3190-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:43:01.805906 kubelet[1522]: I1002 19:43:01.805868 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e41d8468-c486-4f54-9489-19b4b7dd3190-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e41d8468-c486-4f54-9489-19b4b7dd3190" (UID: "e41d8468-c486-4f54-9489-19b4b7dd3190"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:43:01.810770 systemd[1]: var-lib-kubelet-pods-e41d8468\x2dc486\x2d4f54\x2d9489\x2d19b4b7dd3190-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:43:01.811877 kubelet[1522]: I1002 19:43:01.811845 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e41d8468-c486-4f54-9489-19b4b7dd3190-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e41d8468-c486-4f54-9489-19b4b7dd3190" (UID: "e41d8468-c486-4f54-9489-19b4b7dd3190"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:43:01.812108 kubelet[1522]: I1002 19:43:01.812046 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e41d8468-c486-4f54-9489-19b4b7dd3190-kube-api-access-ltl9w" (OuterVolumeSpecName: "kube-api-access-ltl9w") pod "e41d8468-c486-4f54-9489-19b4b7dd3190" (UID: "e41d8468-c486-4f54-9489-19b4b7dd3190"). InnerVolumeSpecName "kube-api-access-ltl9w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:43:01.812981 systemd[1]: var-lib-kubelet-pods-e41d8468\x2dc486\x2d4f54\x2d9489\x2d19b4b7dd3190-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dltl9w.mount: Deactivated successfully. Oct 2 19:43:01.893234 kubelet[1522]: E1002 19:43:01.893165 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:01.895507 kubelet[1522]: I1002 19:43:01.895452 1522 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-hostproc\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:43:01.895507 kubelet[1522]: I1002 19:43:01.895505 1522 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ltl9w\" (UniqueName: \"kubernetes.io/projected/e41d8468-c486-4f54-9489-19b4b7dd3190-kube-api-access-ltl9w\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:43:01.895813 kubelet[1522]: I1002 19:43:01.895523 1522 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-bpf-maps\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:43:01.895813 kubelet[1522]: I1002 19:43:01.895538 1522 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-cilium-cgroup\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:43:01.895813 kubelet[1522]: I1002 19:43:01.895567 1522 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e41d8468-c486-4f54-9489-19b4b7dd3190-clustermesh-secrets\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:43:01.895813 kubelet[1522]: I1002 19:43:01.895586 1522 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e41d8468-c486-4f54-9489-19b4b7dd3190-hubble-tls\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:43:01.895813 kubelet[1522]: I1002 19:43:01.895600 1522 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-cni-path\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:43:01.895813 kubelet[1522]: I1002 19:43:01.895617 1522 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-etc-cni-netd\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:43:01.895813 kubelet[1522]: I1002 19:43:01.895634 1522 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-xtables-lock\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:43:01.895813 kubelet[1522]: I1002 19:43:01.895647 1522 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-lib-modules\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:43:01.895813 kubelet[1522]: I1002 19:43:01.895664 1522 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-host-proc-sys-net\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:43:01.895813 kubelet[1522]: I1002 19:43:01.895682 1522 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/e41d8468-c486-4f54-9489-19b4b7dd3190-host-proc-sys-kernel\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:43:01.895813 kubelet[1522]: I1002 19:43:01.895699 1522 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e41d8468-c486-4f54-9489-19b4b7dd3190-cilium-config-path\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:43:02.374876 kubelet[1522]: I1002 19:43:02.374844 1522 scope.go:117] "RemoveContainer" containerID="8f97d64e0a2cf02053ed253d8dc8d989dd695b9eded68328518d1213589b4092" Oct 2 19:43:02.379640 systemd[1]: Removed slice kubepods-burstable-pode41d8468_c486_4f54_9489_19b4b7dd3190.slice. Oct 2 19:43:02.381867 env[1128]: time="2023-10-02T19:43:02.381820844Z" level=info msg="RemoveContainer for \"8f97d64e0a2cf02053ed253d8dc8d989dd695b9eded68328518d1213589b4092\"" Oct 2 19:43:02.391470 env[1128]: time="2023-10-02T19:43:02.391393531Z" level=info msg="RemoveContainer for \"8f97d64e0a2cf02053ed253d8dc8d989dd695b9eded68328518d1213589b4092\" returns successfully" Oct 2 19:43:02.752311 kubelet[1522]: E1002 19:43:02.752248 1522 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:02.893472 kubelet[1522]: E1002 19:43:02.893402 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:02.903546 kubelet[1522]: E1002 19:43:02.903472 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:02.925517 kubelet[1522]: I1002 19:43:02.925459 1522 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" path="/var/lib/kubelet/pods/e41d8468-c486-4f54-9489-19b4b7dd3190/volumes" Oct 2 19:43:03.894375 kubelet[1522]: E1002 19:43:03.894301 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:04.679930 kubelet[1522]: I1002 19:43:04.679868 1522 topology_manager.go:215] "Topology Admit Handler" podUID="7aee739b-cc9c-4d54-a10d-3b3dcfb5b81c" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-26wd2" Oct 2 19:43:04.679930 kubelet[1522]: E1002 19:43:04.679950 1522 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" containerName="mount-cgroup" Oct 2 19:43:04.680338 kubelet[1522]: E1002 19:43:04.679967 1522 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" containerName="mount-cgroup" Oct 2 19:43:04.680338 kubelet[1522]: E1002 19:43:04.679979 1522 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" containerName="mount-cgroup" Oct 2 19:43:04.680338 kubelet[1522]: E1002 19:43:04.679989 1522 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" containerName="mount-cgroup" Oct 2 19:43:04.680338 kubelet[1522]: E1002 19:43:04.680003 1522 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" containerName="mount-cgroup" Oct 2 19:43:04.680338 kubelet[1522]: I1002 19:43:04.680030 1522 memory_manager.go:346] "RemoveStaleState removing state" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" containerName="mount-cgroup" Oct 2 19:43:04.680338 kubelet[1522]: 
I1002 19:43:04.680040 1522 memory_manager.go:346] "RemoveStaleState removing state" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" containerName="mount-cgroup" Oct 2 19:43:04.680338 kubelet[1522]: I1002 19:43:04.680049 1522 memory_manager.go:346] "RemoveStaleState removing state" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" containerName="mount-cgroup" Oct 2 19:43:04.680338 kubelet[1522]: I1002 19:43:04.680059 1522 memory_manager.go:346] "RemoveStaleState removing state" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" containerName="mount-cgroup" Oct 2 19:43:04.687260 systemd[1]: Created slice kubepods-besteffort-pod7aee739b_cc9c_4d54_a10d_3b3dcfb5b81c.slice. Oct 2 19:43:04.691639 kubelet[1522]: W1002 19:43:04.691602 1522 reflector.go:535] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.128.0.92" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.128.0.92' and this object Oct 2 19:43:04.691639 kubelet[1522]: E1002 19:43:04.691650 1522 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.128.0.92" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.128.0.92' and this object Oct 2 19:43:04.712480 kubelet[1522]: I1002 19:43:04.712426 1522 topology_manager.go:215] "Topology Admit Handler" podUID="ce3e93f2-a296-476f-867e-01304b1d1131" podNamespace="kube-system" podName="cilium-pwkkg" Oct 2 19:43:04.712801 kubelet[1522]: E1002 19:43:04.712533 1522 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" containerName="mount-cgroup" Oct 2 19:43:04.712801 kubelet[1522]: I1002 19:43:04.712606 1522 memory_manager.go:346] "RemoveStaleState removing state" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" containerName="mount-cgroup" Oct 2 19:43:04.712801 kubelet[1522]: I1002 19:43:04.712652 1522 memory_manager.go:346] "RemoveStaleState removing state" podUID="e41d8468-c486-4f54-9489-19b4b7dd3190" containerName="mount-cgroup" Oct 2 19:43:04.714504 kubelet[1522]: I1002 19:43:04.714439 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7aee739b-cc9c-4d54-a10d-3b3dcfb5b81c-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-26wd2\" (UID: \"7aee739b-cc9c-4d54-a10d-3b3dcfb5b81c\") " pod="kube-system/cilium-operator-6bc8ccdb58-26wd2" Oct 2 19:43:04.714694 kubelet[1522]: I1002 19:43:04.714541 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfpst\" (UniqueName: \"kubernetes.io/projected/7aee739b-cc9c-4d54-a10d-3b3dcfb5b81c-kube-api-access-jfpst\") pod \"cilium-operator-6bc8ccdb58-26wd2\" (UID: \"7aee739b-cc9c-4d54-a10d-3b3dcfb5b81c\") " pod="kube-system/cilium-operator-6bc8ccdb58-26wd2" Oct 2 19:43:04.720501 systemd[1]: Created slice kubepods-burstable-podce3e93f2_a296_476f_867e_01304b1d1131.slice. 
Oct 2 19:43:04.814932 kubelet[1522]: I1002 19:43:04.814869 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-etc-cni-netd\") pod \"cilium-pwkkg\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " pod="kube-system/cilium-pwkkg" Oct 2 19:43:04.815376 kubelet[1522]: I1002 19:43:04.815348 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-config-path\") pod \"cilium-pwkkg\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " pod="kube-system/cilium-pwkkg" Oct 2 19:43:04.815524 kubelet[1522]: I1002 19:43:04.815400 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-host-proc-sys-net\") pod \"cilium-pwkkg\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " pod="kube-system/cilium-pwkkg" Oct 2 19:43:04.815524 kubelet[1522]: I1002 19:43:04.815435 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-ipsec-secrets\") pod \"cilium-pwkkg\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " pod="kube-system/cilium-pwkkg" Oct 2 19:43:04.815673 kubelet[1522]: I1002 19:43:04.815543 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-run\") pod \"cilium-pwkkg\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " pod="kube-system/cilium-pwkkg" Oct 2 19:43:04.815673 kubelet[1522]: I1002 19:43:04.815581 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-hostproc\") pod \"cilium-pwkkg\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " pod="kube-system/cilium-pwkkg" Oct 2 19:43:04.815673 kubelet[1522]: I1002 19:43:04.815621 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce3e93f2-a296-476f-867e-01304b1d1131-clustermesh-secrets\") pod \"cilium-pwkkg\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " pod="kube-system/cilium-pwkkg" Oct 2 19:43:04.815673 kubelet[1522]: I1002 19:43:04.815665 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-bpf-maps\") pod \"cilium-pwkkg\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " pod="kube-system/cilium-pwkkg" Oct 2 19:43:04.815906 kubelet[1522]: I1002 19:43:04.815701 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-cni-path\") pod \"cilium-pwkkg\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " pod="kube-system/cilium-pwkkg" Oct 2 19:43:04.815906 kubelet[1522]: I1002 19:43:04.815740 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-lib-modules\") pod \"cilium-pwkkg\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " pod="kube-system/cilium-pwkkg" Oct 2 19:43:04.815906 kubelet[1522]: I1002 19:43:04.815774 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-xtables-lock\") pod \"cilium-pwkkg\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " pod="kube-system/cilium-pwkkg" Oct 2 19:43:04.815906 kubelet[1522]: I1002 19:43:04.815809 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-host-proc-sys-kernel\") pod \"cilium-pwkkg\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " pod="kube-system/cilium-pwkkg" Oct 2 19:43:04.815906 kubelet[1522]: I1002 19:43:04.815849 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce3e93f2-a296-476f-867e-01304b1d1131-hubble-tls\") pod \"cilium-pwkkg\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " pod="kube-system/cilium-pwkkg" Oct 2 19:43:04.815906 kubelet[1522]: I1002 19:43:04.815884 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-cgroup\") pod \"cilium-pwkkg\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " pod="kube-system/cilium-pwkkg" Oct 2 19:43:04.816209 kubelet[1522]: I1002 19:43:04.815922 1522 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pwlf\" (UniqueName: \"kubernetes.io/projected/ce3e93f2-a296-476f-867e-01304b1d1131-kube-api-access-5pwlf\") pod \"cilium-pwkkg\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " pod="kube-system/cilium-pwkkg" Oct 2 19:43:04.895453 kubelet[1522]: E1002 19:43:04.895396 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:05.892474 env[1128]: time="2023-10-02T19:43:05.892388224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-26wd2,Uid:7aee739b-cc9c-4d54-a10d-3b3dcfb5b81c,Namespace:kube-system,Attempt:0,}" Oct 2 19:43:05.896564 kubelet[1522]: E1002 19:43:05.896517 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:05.916662 env[1128]: time="2023-10-02T19:43:05.916536669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:43:05.916662 env[1128]: time="2023-10-02T19:43:05.916595522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:43:05.916662 env[1128]: time="2023-10-02T19:43:05.916614666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:43:05.917279 env[1128]: time="2023-10-02T19:43:05.917214229Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/169af98fe848359f5e945e04a10ceb5d456aafc56833eaab19d9163e71f8d0a2 pid=2132 runtime=io.containerd.runc.v2 Oct 2 19:43:05.928850 env[1128]: time="2023-10-02T19:43:05.928745229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pwkkg,Uid:ce3e93f2-a296-476f-867e-01304b1d1131,Namespace:kube-system,Attempt:0,}" Oct 2 19:43:05.956193 systemd[1]: Started cri-containerd-169af98fe848359f5e945e04a10ceb5d456aafc56833eaab19d9163e71f8d0a2.scope. Oct 2 19:43:05.971791 env[1128]: time="2023-10-02T19:43:05.971694112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:43:05.972150 env[1128]: time="2023-10-02T19:43:05.972096709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:43:05.972434 env[1128]: time="2023-10-02T19:43:05.972360498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:43:05.972846 env[1128]: time="2023-10-02T19:43:05.972798732Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c39835642191a21a632bbcd4f6c9ca9d93e4b483265711edce12ea0c17945191 pid=2164 runtime=io.containerd.runc.v2 Oct 2 19:43:06.008564 kernel: audit: type=1400 audit(1696275785.987:705): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:05.987000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:05.987000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.038930 kernel: audit: type=1400 audit(1696275785.987:706): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.050849 systemd[1]: Started cri-containerd-c39835642191a21a632bbcd4f6c9ca9d93e4b483265711edce12ea0c17945191.scope. 
Oct 2 19:43:05.987000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.079698 kernel: audit: type=1400 audit(1696275785.987:707): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.123540 kernel: audit: type=1400 audit(1696275785.987:708): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:05.987000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:05.987000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.172554 kernel: audit: type=1400 audit(1696275785.987:709): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.172796 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:43:06.172856 kernel: audit: type=1400 audit(1696275785.987:710): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.172898 kernel: audit: audit_lost=39 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:43:05.987000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:05.987000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:05.987000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:05.987000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.013000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.013000 audit: BPF prog-id=81 op=LOAD Oct 2 19:43:06.013000 audit[2144]: AVC avc: denied { bpf } for pid=2144 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.013000 audit[2144]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=2132 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:06.013000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136396166393866653834383335396635653934356530346131306365 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { perfmon } for pid=2144 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=2132 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:06.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136396166393866653834383335396635653934356530346131306365 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { bpf } for pid=2144 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { bpf } for pid=2144 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { bpf } for pid=2144 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { perfmon } for pid=2144 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { perfmon } for pid=2144 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { perfmon } for pid=2144 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { perfmon } for pid=2144 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { perfmon } for pid=2144 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { bpf } for pid=2144 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { bpf } for pid=2144 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit: BPF prog-id=82 op=LOAD Oct 2 19:43:06.014000 audit[2144]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c0002a7ca0 items=0 ppid=2132 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:06.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136396166393866653834383335396635653934356530346131306365 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { bpf } for pid=2144 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { bpf } for pid=2144 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { perfmon } for pid=2144 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { perfmon } for pid=2144 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { perfmon } for pid=2144 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { perfmon } for pid=2144 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { perfmon } for pid=2144 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { bpf } for pid=2144 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { bpf } for pid=2144 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit: BPF prog-id=83 op=LOAD Oct 2 19:43:06.014000 audit[2144]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c0002a7ce8 items=0 ppid=2132 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:06.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136396166393866653834383335396635653934356530346131306365 Oct 2 19:43:06.014000 audit: BPF prog-id=83 op=UNLOAD Oct 2 19:43:06.014000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { bpf } for pid=2144 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { bpf } for pid=2144 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: 
AVC avc: denied { bpf } for pid=2144 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { perfmon } for pid=2144 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { perfmon } for pid=2144 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { perfmon } for pid=2144 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { perfmon } for pid=2144 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { perfmon } for pid=2144 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { bpf } for pid=2144 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit[2144]: AVC avc: denied { bpf } for pid=2144 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.014000 audit: BPF prog-id=84 op=LOAD Oct 2 19:43:06.014000 audit[2144]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c0003180f8 items=0 ppid=2132 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:06.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136396166393866653834383335396635653934356530346131306365 Oct 2 19:43:06.128000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.129000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.129000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.129000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.129000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.129000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.129000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.129000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.182160 env[1128]: time="2023-10-02T19:43:06.182084884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-26wd2,Uid:7aee739b-cc9c-4d54-a10d-3b3dcfb5b81c,Namespace:kube-system,Attempt:0,} returns sandbox id \"169af98fe848359f5e945e04a10ceb5d456aafc56833eaab19d9163e71f8d0a2\"" Oct 2 19:43:06.181000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.181000 audit: BPF prog-id=85 op=LOAD Oct 2 19:43:06.182000 audit[2179]: AVC avc: denied { bpf } for pid=2179 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.182000 audit[2179]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2164 pid=2179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:06.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333393833353634323139316132316136333262626364346636633963 Oct 2 19:43:06.182000 audit[2179]: AVC avc: denied { perfmon } for pid=2179 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.182000 audit[2179]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2164 pid=2179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:06.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333393833353634323139316132316136333262626364346636633963 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { bpf } for pid=2179 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { bpf } for pid=2179 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { bpf } for pid=2179 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { perfmon } for pid=2179 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { perfmon } for pid=2179 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { perfmon } for pid=2179 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { perfmon } for pid=2179 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { perfmon } for pid=2179 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { bpf } for pid=2179 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { bpf } for pid=2179 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit: BPF prog-id=86 op=LOAD Oct 2 19:43:06.183000 audit[2179]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0002abc40 items=0 ppid=2164 pid=2179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:06.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333393833353634323139316132316136333262626364346636633963 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { bpf } for pid=2179 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { bpf } for pid=2179 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { perfmon } for pid=2179 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { perfmon } for pid=2179 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { perfmon } for pid=2179 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { perfmon } for pid=2179 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { perfmon } for pid=2179 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: 
AVC avc: denied { bpf } for pid=2179 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { bpf } for pid=2179 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit: BPF prog-id=87 op=LOAD Oct 2 19:43:06.183000 audit[2179]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c0002abc88 items=0 ppid=2164 pid=2179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:06.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333393833353634323139316132316136333262626364346636633963 Oct 2 19:43:06.183000 audit: BPF prog-id=87 op=UNLOAD Oct 2 19:43:06.183000 audit: BPF prog-id=86 op=UNLOAD Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { bpf } for pid=2179 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { bpf } for pid=2179 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { bpf } for pid=2179 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { perfmon } for pid=2179 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { perfmon } for pid=2179 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { perfmon } for pid=2179 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { perfmon } for pid=2179 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { perfmon } for pid=2179 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { bpf } for pid=2179 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit[2179]: AVC avc: denied { bpf } for pid=2179 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:06.183000 audit: BPF prog-id=88 op=LOAD Oct 2 19:43:06.183000 audit[2179]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c0002fa098 items=0 ppid=2164 pid=2179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:06.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333393833353634323139316132316136333262626364346636633963 Oct 2 19:43:06.195793 kubelet[1522]: E1002 19:43:06.195573 1522 gcpcredential.go:74] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url Oct 2 19:43:06.196885 env[1128]: time="2023-10-02T19:43:06.196834641Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 2 19:43:06.209735 env[1128]: time="2023-10-02T19:43:06.209663345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pwkkg,Uid:ce3e93f2-a296-476f-867e-01304b1d1131,Namespace:kube-system,Attempt:0,} returns sandbox id \"c39835642191a21a632bbcd4f6c9ca9d93e4b483265711edce12ea0c17945191\"" Oct 2 19:43:06.215279 env[1128]: time="2023-10-02T19:43:06.215224409Z" level=info msg="CreateContainer within sandbox \"c39835642191a21a632bbcd4f6c9ca9d93e4b483265711edce12ea0c17945191\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:43:06.232257 env[1128]: time="2023-10-02T19:43:06.232193570Z" level=info msg="CreateContainer within sandbox \"c39835642191a21a632bbcd4f6c9ca9d93e4b483265711edce12ea0c17945191\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ef002de11b568768f6a8560e0ecf7d31b9de90430c039ed6fd94342e0a28d247\"" Oct 2 19:43:06.233038 env[1128]: time="2023-10-02T19:43:06.232985035Z" level=info msg="StartContainer for \"ef002de11b568768f6a8560e0ecf7d31b9de90430c039ed6fd94342e0a28d247\"" Oct 2 19:43:06.256730 systemd[1]: Started cri-containerd-ef002de11b568768f6a8560e0ecf7d31b9de90430c039ed6fd94342e0a28d247.scope. Oct 2 19:43:06.280323 systemd[1]: cri-containerd-ef002de11b568768f6a8560e0ecf7d31b9de90430c039ed6fd94342e0a28d247.scope: Deactivated successfully. 
Oct 2 19:43:06.298837 env[1128]: time="2023-10-02T19:43:06.298757926Z" level=info msg="shim disconnected" id=ef002de11b568768f6a8560e0ecf7d31b9de90430c039ed6fd94342e0a28d247 Oct 2 19:43:06.298837 env[1128]: time="2023-10-02T19:43:06.298841394Z" level=warning msg="cleaning up after shim disconnected" id=ef002de11b568768f6a8560e0ecf7d31b9de90430c039ed6fd94342e0a28d247 namespace=k8s.io Oct 2 19:43:06.299261 env[1128]: time="2023-10-02T19:43:06.298856416Z" level=info msg="cleaning up dead shim" Oct 2 19:43:06.312910 env[1128]: time="2023-10-02T19:43:06.312834992Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2230 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:43:06Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ef002de11b568768f6a8560e0ecf7d31b9de90430c039ed6fd94342e0a28d247/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:43:06.313287 env[1128]: time="2023-10-02T19:43:06.313201897Z" level=error msg="copy shim log" error="read /proc/self/fd/38: file already closed" Oct 2 19:43:06.313669 env[1128]: time="2023-10-02T19:43:06.313600471Z" level=error msg="Failed to pipe stdout of container \"ef002de11b568768f6a8560e0ecf7d31b9de90430c039ed6fd94342e0a28d247\"" error="reading from a closed fifo" Oct 2 19:43:06.314601 env[1128]: time="2023-10-02T19:43:06.314547889Z" level=error msg="Failed to pipe stderr of container \"ef002de11b568768f6a8560e0ecf7d31b9de90430c039ed6fd94342e0a28d247\"" error="reading from a closed fifo" Oct 2 19:43:06.316880 env[1128]: time="2023-10-02T19:43:06.316810199Z" level=error msg="StartContainer for \"ef002de11b568768f6a8560e0ecf7d31b9de90430c039ed6fd94342e0a28d247\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:43:06.317207 kubelet[1522]: E1002 19:43:06.317167 1522 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ef002de11b568768f6a8560e0ecf7d31b9de90430c039ed6fd94342e0a28d247" Oct 2 19:43:06.317385 kubelet[1522]: E1002 19:43:06.317343 1522 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:43:06.317385 kubelet[1522]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:43:06.317385 kubelet[1522]: rm /hostbin/cilium-mount Oct 2 19:43:06.317385 kubelet[1522]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5pwlf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-pwkkg_kube-system(ce3e93f2-a296-476f-867e-01304b1d1131): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:43:06.317864 kubelet[1522]: E1002 19:43:06.317410 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pwkkg" podUID="ce3e93f2-a296-476f-867e-01304b1d1131" Oct 2 19:43:06.393942 env[1128]: time="2023-10-02T19:43:06.393837567Z" level=info msg="CreateContainer within sandbox \"c39835642191a21a632bbcd4f6c9ca9d93e4b483265711edce12ea0c17945191\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:43:06.413298 env[1128]: time="2023-10-02T19:43:06.413228208Z" level=info msg="CreateContainer within sandbox \"c39835642191a21a632bbcd4f6c9ca9d93e4b483265711edce12ea0c17945191\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"046ff9dd499f795d8b9def34a7de5c95a75cd846fe98ea48a97df7c3bd6f70f3\"" Oct 2 19:43:06.414286 env[1128]: time="2023-10-02T19:43:06.414229911Z" level=info msg="StartContainer for \"046ff9dd499f795d8b9def34a7de5c95a75cd846fe98ea48a97df7c3bd6f70f3\"" Oct 2 19:43:06.441093 systemd[1]: Started cri-containerd-046ff9dd499f795d8b9def34a7de5c95a75cd846fe98ea48a97df7c3bd6f70f3.scope. Oct 2 19:43:06.460075 systemd[1]: cri-containerd-046ff9dd499f795d8b9def34a7de5c95a75cd846fe98ea48a97df7c3bd6f70f3.scope: Deactivated successfully. 
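Every start attempt for the mount-cgroup init container in this log dies at the same point: runc's container init exits with "write /proc/self/attr/keycreate: invalid argument". That file is the procattr interface for labelling kernel keyrings created by the process; the runtime writes the container's SELinux context there (derived from the SELinuxOptions Type:spc_t, Level:s0 in the spec above) before exec'ing the workload, and EINVAL from the kernel generally means the loaded policy does not accept the requested context. A minimal sketch of the failing step, purely illustrative rather than runc's actual code path; the full user:role:type:level string below is an assumption:

package main

import (
	"fmt"
	"os"
)

// setKeyCreateLabel mimics the step that fails above: writing an SELinux
// context to /proc/self/attr/keycreate so that keyrings created afterwards
// carry the container's label. The kernel validates the context against the
// loaded policy and returns EINVAL if it is not defined there.
func setKeyCreateLabel(label string) error {
	f, err := os.OpenFile("/proc/self/attr/keycreate", os.O_WRONLY, 0)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = f.WriteString(label)
	return err
}

func main() {
	// Context assembled from the pod spec above (Type:spc_t, Level:s0);
	// the user and role fields are assumptions for this sketch.
	if err := setKeyCreateLabel("system_u:system_r:spc_t:s0"); err != nil {
		fmt.Fprintln(os.Stderr, "keycreate:", err)
		os.Exit(1)
	}
	fmt.Println("keycreate label accepted")
}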
Oct 2 19:43:06.471732 env[1128]: time="2023-10-02T19:43:06.471629721Z" level=info msg="shim disconnected" id=046ff9dd499f795d8b9def34a7de5c95a75cd846fe98ea48a97df7c3bd6f70f3 Oct 2 19:43:06.471732 env[1128]: time="2023-10-02T19:43:06.471720746Z" level=warning msg="cleaning up after shim disconnected" id=046ff9dd499f795d8b9def34a7de5c95a75cd846fe98ea48a97df7c3bd6f70f3 namespace=k8s.io Oct 2 19:43:06.471732 env[1128]: time="2023-10-02T19:43:06.471737419Z" level=info msg="cleaning up dead shim" Oct 2 19:43:06.484899 env[1128]: time="2023-10-02T19:43:06.484835627Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2267 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:43:06Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/046ff9dd499f795d8b9def34a7de5c95a75cd846fe98ea48a97df7c3bd6f70f3/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:43:06.485330 env[1128]: time="2023-10-02T19:43:06.485217817Z" level=error msg="copy shim log" error="read /proc/self/fd/40: file already closed" Oct 2 19:43:06.485820 env[1128]: time="2023-10-02T19:43:06.485756779Z" level=error msg="Failed to pipe stdout of container \"046ff9dd499f795d8b9def34a7de5c95a75cd846fe98ea48a97df7c3bd6f70f3\"" error="reading from a closed fifo" Oct 2 19:43:06.486611 env[1128]: time="2023-10-02T19:43:06.486556184Z" level=error msg="Failed to pipe stderr of container \"046ff9dd499f795d8b9def34a7de5c95a75cd846fe98ea48a97df7c3bd6f70f3\"" error="reading from a closed fifo" Oct 2 19:43:06.488935 env[1128]: time="2023-10-02T19:43:06.488874257Z" level=error msg="StartContainer for \"046ff9dd499f795d8b9def34a7de5c95a75cd846fe98ea48a97df7c3bd6f70f3\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:43:06.489314 kubelet[1522]: E1002 19:43:06.489283 1522 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="046ff9dd499f795d8b9def34a7de5c95a75cd846fe98ea48a97df7c3bd6f70f3" Oct 2 19:43:06.489978 kubelet[1522]: E1002 19:43:06.489475 1522 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:43:06.489978 kubelet[1522]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:43:06.489978 kubelet[1522]: rm /hostbin/cilium-mount Oct 2 19:43:06.489978 kubelet[1522]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5pwlf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-pwkkg_kube-system(ce3e93f2-a296-476f-867e-01304b1d1131): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:43:06.489978 kubelet[1522]: E1002 19:43:06.489619 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pwkkg" podUID="ce3e93f2-a296-476f-867e-01304b1d1131" Oct 2 19:43:06.897592 kubelet[1522]: E1002 19:43:06.897411 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:07.394885 kubelet[1522]: I1002 19:43:07.394228 1522 scope.go:117] "RemoveContainer" containerID="ef002de11b568768f6a8560e0ecf7d31b9de90430c039ed6fd94342e0a28d247" Oct 2 19:43:07.394885 kubelet[1522]: I1002 19:43:07.394849 1522 scope.go:117] "RemoveContainer" containerID="ef002de11b568768f6a8560e0ecf7d31b9de90430c039ed6fd94342e0a28d247" Oct 2 19:43:07.396717 env[1128]: time="2023-10-02T19:43:07.396653345Z" level=info msg="RemoveContainer for \"ef002de11b568768f6a8560e0ecf7d31b9de90430c039ed6fd94342e0a28d247\"" Oct 2 19:43:07.397249 env[1128]: time="2023-10-02T19:43:07.397030940Z" level=info msg="RemoveContainer for \"ef002de11b568768f6a8560e0ecf7d31b9de90430c039ed6fd94342e0a28d247\"" Oct 2 19:43:07.397249 env[1128]: time="2023-10-02T19:43:07.397136743Z" level=error msg="RemoveContainer for \"ef002de11b568768f6a8560e0ecf7d31b9de90430c039ed6fd94342e0a28d247\" failed" error="failed to set removing state for container \"ef002de11b568768f6a8560e0ecf7d31b9de90430c039ed6fd94342e0a28d247\": container is already in removing state" Oct 2 19:43:07.397644 kubelet[1522]: E1002 19:43:07.397619 1522 remote_runtime.go:385] "RemoveContainer from 
runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"ef002de11b568768f6a8560e0ecf7d31b9de90430c039ed6fd94342e0a28d247\": container is already in removing state" containerID="ef002de11b568768f6a8560e0ecf7d31b9de90430c039ed6fd94342e0a28d247" Oct 2 19:43:07.398337 kubelet[1522]: E1002 19:43:07.397666 1522 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "ef002de11b568768f6a8560e0ecf7d31b9de90430c039ed6fd94342e0a28d247": container is already in removing state; Skipping pod "cilium-pwkkg_kube-system(ce3e93f2-a296-476f-867e-01304b1d1131)" Oct 2 19:43:07.398337 kubelet[1522]: E1002 19:43:07.398103 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-pwkkg_kube-system(ce3e93f2-a296-476f-867e-01304b1d1131)\"" pod="kube-system/cilium-pwkkg" podUID="ce3e93f2-a296-476f-867e-01304b1d1131" Oct 2 19:43:07.405011 env[1128]: time="2023-10-02T19:43:07.404968181Z" level=info msg="RemoveContainer for \"ef002de11b568768f6a8560e0ecf7d31b9de90430c039ed6fd94342e0a28d247\" returns successfully" Oct 2 19:43:07.424232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3328763817.mount: Deactivated successfully. Oct 2 19:43:07.898450 kubelet[1522]: E1002 19:43:07.898356 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:07.905241 kubelet[1522]: E1002 19:43:07.905158 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:08.197740 env[1128]: time="2023-10-02T19:43:08.197667188Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:08.200169 env[1128]: time="2023-10-02T19:43:08.200125328Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:08.202234 env[1128]: time="2023-10-02T19:43:08.202198653Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:08.203417 env[1128]: time="2023-10-02T19:43:08.203363057Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 2 19:43:08.206434 env[1128]: time="2023-10-02T19:43:08.206384758Z" level=info msg="CreateContainer within sandbox \"169af98fe848359f5e945e04a10ceb5d456aafc56833eaab19d9163e71f8d0a2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:43:08.224417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2142754839.mount: Deactivated successfully. 
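Two things in the entries above are easy to misread. First, the paired "RemoveContainer" messages followed by "container is already in removing state" appear to be kubelet issuing removal of the failed init container twice; one call wins, the other reports the error, and the later "RemoveContainer ... returns successfully" line confirms the cleanup did happen. Second, the operator image is pulled by a digest-pinned reference and resolved to the local image ID sha256:ed355..., so the v1.12.5 tag is informational once a digest is present. A small stdlib-only sketch of how such a reference splits into its parts (a simplification of the real reference grammar used by containerd and Docker):

package main

import (
	"fmt"
	"strings"
)

// splitRef breaks a digest-pinned image reference into repository, tag and
// digest. This is a simplified parser for references shaped like the one in
// the log above; the real grammar (registry ports, missing tags, etc.) has
// more cases.
func splitRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		ref, digest = ref[:i], ref[i+1:]
	}
	if i := strings.LastIndex(ref, ":"); i >= 0 && !strings.Contains(ref[i:], "/") {
		ref, tag = ref[:i], ref[i+1:]
	}
	return ref, tag, digest
}

func main() {
	ref := "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"
	repo, tag, digest := splitRef(ref)
	fmt.Println("repository:", repo) // quay.io/cilium/operator-generic
	fmt.Println("tag:       ", tag)  // v1.12.5 (informational once a digest is pinned)
	fmt.Println("digest:    ", digest)
}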
Oct 2 19:43:08.234815 env[1128]: time="2023-10-02T19:43:08.234752002Z" level=info msg="CreateContainer within sandbox \"169af98fe848359f5e945e04a10ceb5d456aafc56833eaab19d9163e71f8d0a2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"615e54e7000873e78f4746e3df9e6c6dbfbaedae0753cc6ae86100c6357d2a10\"" Oct 2 19:43:08.235774 env[1128]: time="2023-10-02T19:43:08.235720367Z" level=info msg="StartContainer for \"615e54e7000873e78f4746e3df9e6c6dbfbaedae0753cc6ae86100c6357d2a10\"" Oct 2 19:43:08.264721 systemd[1]: Started cri-containerd-615e54e7000873e78f4746e3df9e6c6dbfbaedae0753cc6ae86100c6357d2a10.scope. Oct 2 19:43:08.315162 kernel: kauditd_printk_skb: 108 callbacks suppressed Oct 2 19:43:08.315355 kernel: audit: type=1400 audit(1696275788.287:740): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.315407 kernel: audit: type=1400 audit(1696275788.287:741): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.287000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.287000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.287000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.355748 kernel: audit: type=1400 audit(1696275788.287:742): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.356016 kernel: audit: type=1400 audit(1696275788.287:743): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.397279 kernel: audit: type=1400 audit(1696275788.287:744): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.445325 kernel: audit: type=1400 audit(1696275788.287:745): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.445510 kernel: audit: type=1400 audit(1696275788.287:746): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.468519 kernel: audit: type=1400 audit(1696275788.287:747): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.468908 env[1128]: time="2023-10-02T19:43:08.468856439Z" level=info msg="StartContainer for \"615e54e7000873e78f4746e3df9e6c6dbfbaedae0753cc6ae86100c6357d2a10\" returns successfully" Oct 2 19:43:08.287000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.287000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.510726 kernel: audit: type=1400 audit(1696275788.287:748): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.510890 kernel: audit: type=1400 audit(1696275788.287:749): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.287000 audit: BPF prog-id=89 op=LOAD Oct 2 19:43:08.288000 audit[2288]: AVC avc: denied { bpf } for pid=2288 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.288000 audit[2288]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2132 pid=2288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:08.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356535346537303030383733653738663437343665336466396536 Oct 2 19:43:08.288000 audit[2288]: AVC avc: denied { perfmon } for pid=2288 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.288000 audit[2288]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=2132 pid=2288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:08.288000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356535346537303030383733653738663437343665336466396536 Oct 2 19:43:08.288000 audit[2288]: AVC avc: denied { bpf } for pid=2288 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.288000 audit[2288]: AVC avc: denied { bpf } for pid=2288 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.288000 audit[2288]: AVC avc: denied { bpf } for pid=2288 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.288000 audit[2288]: AVC avc: denied { perfmon } for pid=2288 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.288000 audit[2288]: AVC avc: denied { perfmon } for pid=2288 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.288000 audit[2288]: AVC avc: denied { perfmon } for pid=2288 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.288000 audit[2288]: AVC avc: denied { perfmon } for pid=2288 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.288000 audit[2288]: AVC avc: denied { perfmon } for pid=2288 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.288000 audit[2288]: AVC avc: denied { bpf } for pid=2288 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.288000 audit[2288]: AVC avc: denied { bpf } for pid=2288 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.288000 audit: BPF prog-id=90 op=LOAD Oct 2 19:43:08.288000 audit[2288]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000286a00 items=0 ppid=2132 pid=2288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:08.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356535346537303030383733653738663437343665336466396536 Oct 2 19:43:08.334000 audit[2288]: AVC avc: denied { bpf } for pid=2288 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.334000 audit[2288]: AVC avc: denied { bpf } for pid=2288 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.334000 audit[2288]: AVC avc: denied { perfmon } 
for pid=2288 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.334000 audit[2288]: AVC avc: denied { perfmon } for pid=2288 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.334000 audit[2288]: AVC avc: denied { perfmon } for pid=2288 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.334000 audit[2288]: AVC avc: denied { perfmon } for pid=2288 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.334000 audit[2288]: AVC avc: denied { perfmon } for pid=2288 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.334000 audit[2288]: AVC avc: denied { bpf } for pid=2288 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.334000 audit[2288]: AVC avc: denied { bpf } for pid=2288 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.334000 audit: BPF prog-id=91 op=LOAD Oct 2 19:43:08.334000 audit[2288]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000286a48 items=0 ppid=2132 pid=2288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:08.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356535346537303030383733653738663437343665336466396536 Oct 2 19:43:08.397000 audit: BPF prog-id=91 op=UNLOAD Oct 2 19:43:08.397000 audit: BPF prog-id=90 op=UNLOAD Oct 2 19:43:08.397000 audit[2288]: AVC avc: denied { bpf } for pid=2288 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.397000 audit[2288]: AVC avc: denied { bpf } for pid=2288 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.397000 audit[2288]: AVC avc: denied { bpf } for pid=2288 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.397000 audit[2288]: AVC avc: denied { perfmon } for pid=2288 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.397000 audit[2288]: AVC avc: denied { perfmon } for pid=2288 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.397000 audit[2288]: AVC avc: denied { perfmon } for pid=2288 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.397000 audit[2288]: AVC avc: denied { perfmon } 
for pid=2288 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.397000 audit[2288]: AVC avc: denied { perfmon } for pid=2288 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.397000 audit[2288]: AVC avc: denied { bpf } for pid=2288 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.397000 audit[2288]: AVC avc: denied { bpf } for pid=2288 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:08.397000 audit: BPF prog-id=92 op=LOAD Oct 2 19:43:08.397000 audit[2288]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000286e58 items=0 ppid=2132 pid=2288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:08.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356535346537303030383733653738663437343665336466396536 Oct 2 19:43:08.467000 audit[2299]: AVC avc: denied { map_create } for pid=2299 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c282,c530 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c282,c530 tclass=bpf permissive=0 Oct 2 19:43:08.467000 audit[2299]: SYSCALL arch=c000003e syscall=321 success=no exit=-13 a0=0 a1=c0005357d0 a2=48 a3=c0005357c0 items=0 ppid=2132 pid=2299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c282,c530 key=(null) Oct 2 19:43:08.467000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:43:08.898873 kubelet[1522]: E1002 19:43:08.898720 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:09.404566 kubelet[1522]: W1002 19:43:09.404503 1522 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce3e93f2_a296_476f_867e_01304b1d1131.slice/cri-containerd-ef002de11b568768f6a8560e0ecf7d31b9de90430c039ed6fd94342e0a28d247.scope WatchSource:0}: container "ef002de11b568768f6a8560e0ecf7d31b9de90430c039ed6fd94342e0a28d247" in namespace "k8s.io": not found Oct 2 19:43:09.423173 kubelet[1522]: I1002 19:43:09.423122 1522 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-26wd2" podStartSLOduration=3.408412082 podCreationTimestamp="2023-10-02 19:43:04 +0000 UTC" firstStartedPulling="2023-10-02 19:43:06.189202617 +0000 UTC m=+204.491162129" lastFinishedPulling="2023-10-02 19:43:08.20386152 +0000 UTC m=+206.505821037" observedRunningTime="2023-10-02 19:43:09.422752119 +0000 UTC m=+207.724711653" watchObservedRunningTime="2023-10-02 19:43:09.42307099 +0000 UTC m=+207.725030525" Oct 2 19:43:09.899120 kubelet[1522]: E1002 
19:43:09.899049 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:10.899762 kubelet[1522]: E1002 19:43:10.899693 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:11.900276 kubelet[1522]: E1002 19:43:11.900201 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:12.518544 kubelet[1522]: W1002 19:43:12.518457 1522 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce3e93f2_a296_476f_867e_01304b1d1131.slice/cri-containerd-046ff9dd499f795d8b9def34a7de5c95a75cd846fe98ea48a97df7c3bd6f70f3.scope WatchSource:0}: task 046ff9dd499f795d8b9def34a7de5c95a75cd846fe98ea48a97df7c3bd6f70f3 not found: not found Oct 2 19:43:12.901276 kubelet[1522]: E1002 19:43:12.901087 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:12.906465 kubelet[1522]: E1002 19:43:12.906323 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:13.902051 kubelet[1522]: E1002 19:43:13.901984 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:14.903102 kubelet[1522]: E1002 19:43:14.903031 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:15.903980 kubelet[1522]: E1002 19:43:15.903910 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:16.905072 kubelet[1522]: E1002 19:43:16.905000 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:17.905986 kubelet[1522]: E1002 19:43:17.905920 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:17.907837 kubelet[1522]: E1002 19:43:17.907806 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:18.907178 kubelet[1522]: E1002 19:43:18.907096 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:19.908034 kubelet[1522]: E1002 19:43:19.907965 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:20.908271 kubelet[1522]: E1002 19:43:20.908203 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:21.908462 kubelet[1522]: E1002 19:43:21.908382 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:22.751812 kubelet[1522]: E1002 19:43:22.751744 1522 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:22.908648 kubelet[1522]: E1002 19:43:22.908599 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:43:22.909666 kubelet[1522]: E1002 19:43:22.909633 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:22.931923 env[1128]: time="2023-10-02T19:43:22.931845179Z" level=info msg="CreateContainer within sandbox \"c39835642191a21a632bbcd4f6c9ca9d93e4b483265711edce12ea0c17945191\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:43:22.954760 env[1128]: time="2023-10-02T19:43:22.954678989Z" level=info msg="CreateContainer within sandbox \"c39835642191a21a632bbcd4f6c9ca9d93e4b483265711edce12ea0c17945191\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d\"" Oct 2 19:43:22.955894 env[1128]: time="2023-10-02T19:43:22.955841224Z" level=info msg="StartContainer for \"c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d\"" Oct 2 19:43:22.992137 systemd[1]: run-containerd-runc-k8s.io-c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d-runc.7kLpnq.mount: Deactivated successfully. Oct 2 19:43:22.998113 systemd[1]: Started cri-containerd-c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d.scope. Oct 2 19:43:23.016026 systemd[1]: cri-containerd-c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d.scope: Deactivated successfully. Oct 2 19:43:23.224634 env[1128]: time="2023-10-02T19:43:23.224527511Z" level=info msg="shim disconnected" id=c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d Oct 2 19:43:23.224634 env[1128]: time="2023-10-02T19:43:23.224616464Z" level=warning msg="cleaning up after shim disconnected" id=c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d namespace=k8s.io Oct 2 19:43:23.224634 env[1128]: time="2023-10-02T19:43:23.224632913Z" level=info msg="cleaning up dead shim" Oct 2 19:43:23.239625 env[1128]: time="2023-10-02T19:43:23.239537088Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2341 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:43:23Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:43:23.240112 env[1128]: time="2023-10-02T19:43:23.239934541Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:43:23.241571 env[1128]: time="2023-10-02T19:43:23.241478967Z" level=error msg="Failed to pipe stdout of container \"c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d\"" error="reading from a closed fifo" Oct 2 19:43:23.241853 env[1128]: time="2023-10-02T19:43:23.241590350Z" level=error msg="Failed to pipe stderr of container \"c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d\"" error="reading from a closed fifo" Oct 2 19:43:23.244141 env[1128]: time="2023-10-02T19:43:23.244078771Z" level=error msg="StartContainer for \"c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:43:23.244504 
kubelet[1522]: E1002 19:43:23.244423 1522 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d" Oct 2 19:43:23.244682 kubelet[1522]: E1002 19:43:23.244642 1522 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:43:23.244682 kubelet[1522]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:43:23.244682 kubelet[1522]: rm /hostbin/cilium-mount Oct 2 19:43:23.244682 kubelet[1522]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5pwlf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-pwkkg_kube-system(ce3e93f2-a296-476f-867e-01304b1d1131): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:43:23.245040 kubelet[1522]: E1002 19:43:23.244720 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pwkkg" podUID="ce3e93f2-a296-476f-867e-01304b1d1131" Oct 2 19:43:23.443524 kubelet[1522]: I1002 19:43:23.443336 1522 scope.go:117] "RemoveContainer" containerID="046ff9dd499f795d8b9def34a7de5c95a75cd846fe98ea48a97df7c3bd6f70f3" Oct 2 19:43:23.443905 kubelet[1522]: I1002 19:43:23.443878 1522 scope.go:117] "RemoveContainer" 
containerID="046ff9dd499f795d8b9def34a7de5c95a75cd846fe98ea48a97df7c3bd6f70f3" Oct 2 19:43:23.445461 env[1128]: time="2023-10-02T19:43:23.445408081Z" level=info msg="RemoveContainer for \"046ff9dd499f795d8b9def34a7de5c95a75cd846fe98ea48a97df7c3bd6f70f3\"" Oct 2 19:43:23.446326 env[1128]: time="2023-10-02T19:43:23.446279814Z" level=info msg="RemoveContainer for \"046ff9dd499f795d8b9def34a7de5c95a75cd846fe98ea48a97df7c3bd6f70f3\"" Oct 2 19:43:23.446450 env[1128]: time="2023-10-02T19:43:23.446395950Z" level=error msg="RemoveContainer for \"046ff9dd499f795d8b9def34a7de5c95a75cd846fe98ea48a97df7c3bd6f70f3\" failed" error="failed to set removing state for container \"046ff9dd499f795d8b9def34a7de5c95a75cd846fe98ea48a97df7c3bd6f70f3\": container is already in removing state" Oct 2 19:43:23.446666 kubelet[1522]: E1002 19:43:23.446636 1522 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"046ff9dd499f795d8b9def34a7de5c95a75cd846fe98ea48a97df7c3bd6f70f3\": container is already in removing state" containerID="046ff9dd499f795d8b9def34a7de5c95a75cd846fe98ea48a97df7c3bd6f70f3" Oct 2 19:43:23.446772 kubelet[1522]: E1002 19:43:23.446680 1522 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "046ff9dd499f795d8b9def34a7de5c95a75cd846fe98ea48a97df7c3bd6f70f3": container is already in removing state; Skipping pod "cilium-pwkkg_kube-system(ce3e93f2-a296-476f-867e-01304b1d1131)" Oct 2 19:43:23.447165 kubelet[1522]: E1002 19:43:23.447142 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-pwkkg_kube-system(ce3e93f2-a296-476f-867e-01304b1d1131)\"" pod="kube-system/cilium-pwkkg" podUID="ce3e93f2-a296-476f-867e-01304b1d1131" Oct 2 19:43:23.452586 env[1128]: time="2023-10-02T19:43:23.452523863Z" level=info msg="RemoveContainer for \"046ff9dd499f795d8b9def34a7de5c95a75cd846fe98ea48a97df7c3bd6f70f3\" returns successfully" Oct 2 19:43:23.909604 kubelet[1522]: E1002 19:43:23.909445 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:23.946143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d-rootfs.mount: Deactivated successfully. 
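The one container that does come up, cilium-operator ("StartContainer ... returns successfully" at 19:43:08.468), immediately runs into its own SELinux limit: the 19:43:08.467 audit record shows avc: denied { map_create } for the svirt_lxc_net_t domain, and the matching SYSCALL record has a0=0 (BPF_MAP_CREATE) with success=no exit=-13, i.e. bpf(2) returning EACCES. A minimal sketch of the call that was denied; the map parameters are arbitrary illustration values and the syscall number is the x86_64 one from the record:

package main

import (
	"fmt"
	"syscall"
	"unsafe"
)

// Layout of the BPF_MAP_CREATE portion of union bpf_attr; these four fields
// are all a minimal map needs.
type bpfMapCreateAttr struct {
	MapType    uint32
	KeySize    uint32
	ValueSize  uint32
	MaxEntries uint32
}

func main() {
	const (
		sysBPF       = 321 // bpf(2) on x86_64, matching arch=c000003e syscall=321 above
		bpfMapCreate = 0   // command 0, matching a0=0 in the SYSCALL record
	)
	attr := bpfMapCreateAttr{
		MapType:    1, // BPF_MAP_TYPE_HASH
		KeySize:    4,
		ValueSize:  8,
		MaxEntries: 16,
	}
	fd, _, errno := syscall.Syscall(sysBPF, bpfMapCreate,
		uintptr(unsafe.Pointer(&attr)), unsafe.Sizeof(attr))
	if errno != 0 {
		// Under the policy seen in this log, SELinux denies map_create for
		// svirt_lxc_net_t, so the call fails with EACCES (-13), matching the
		// exit=-13 recorded above.
		fmt.Println("bpf(BPF_MAP_CREATE):", errno)
		return
	}
	fmt.Println("created map fd", fd)
	syscall.Close(int(fd))
}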
Oct 2 19:43:24.910098 kubelet[1522]: E1002 19:43:24.910028 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:25.910438 kubelet[1522]: E1002 19:43:25.910371 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:26.330705 kubelet[1522]: W1002 19:43:26.330641 1522 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce3e93f2_a296_476f_867e_01304b1d1131.slice/cri-containerd-c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d.scope WatchSource:0}: task c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d not found: not found Oct 2 19:43:26.911543 kubelet[1522]: E1002 19:43:26.911481 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:27.911536 kubelet[1522]: E1002 19:43:27.911474 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:27.912182 kubelet[1522]: E1002 19:43:27.911700 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:28.912682 kubelet[1522]: E1002 19:43:28.912609 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:29.913355 kubelet[1522]: E1002 19:43:29.913270 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:30.913881 kubelet[1522]: E1002 19:43:30.913801 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:31.914833 kubelet[1522]: E1002 19:43:31.914751 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:32.912296 kubelet[1522]: E1002 19:43:32.912216 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:32.915317 kubelet[1522]: E1002 19:43:32.915245 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:33.916351 kubelet[1522]: E1002 19:43:33.916279 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:34.916984 kubelet[1522]: E1002 19:43:34.916908 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:35.917698 kubelet[1522]: E1002 19:43:35.917644 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:35.922790 kubelet[1522]: E1002 19:43:35.922720 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-pwkkg_kube-system(ce3e93f2-a296-476f-867e-01304b1d1131)\"" pod="kube-system/cilium-pwkkg" podUID="ce3e93f2-a296-476f-867e-01304b1d1131" Oct 2 19:43:36.917944 kubelet[1522]: E1002 19:43:36.917874 1522 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:37.913407 kubelet[1522]: E1002 19:43:37.913370 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:37.918593 kubelet[1522]: E1002 19:43:37.918552 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:38.919625 kubelet[1522]: E1002 19:43:38.919551 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:39.920146 kubelet[1522]: E1002 19:43:39.920078 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:40.920652 kubelet[1522]: E1002 19:43:40.920579 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:41.921433 kubelet[1522]: E1002 19:43:41.921365 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:42.752514 kubelet[1522]: E1002 19:43:42.752434 1522 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:42.781137 env[1128]: time="2023-10-02T19:43:42.781079524Z" level=info msg="StopPodSandbox for \"1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c\"" Oct 2 19:43:42.781723 env[1128]: time="2023-10-02T19:43:42.781211050Z" level=info msg="TearDown network for sandbox \"1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c\" successfully" Oct 2 19:43:42.781723 env[1128]: time="2023-10-02T19:43:42.781268429Z" level=info msg="StopPodSandbox for \"1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c\" returns successfully" Oct 2 19:43:42.782327 env[1128]: time="2023-10-02T19:43:42.782285443Z" level=info msg="RemovePodSandbox for \"1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c\"" Oct 2 19:43:42.782510 env[1128]: time="2023-10-02T19:43:42.782327539Z" level=info msg="Forcibly stopping sandbox \"1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c\"" Oct 2 19:43:42.782510 env[1128]: time="2023-10-02T19:43:42.782425603Z" level=info msg="TearDown network for sandbox \"1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c\" successfully" Oct 2 19:43:42.787059 env[1128]: time="2023-10-02T19:43:42.786989538Z" level=info msg="RemovePodSandbox \"1f24309a1dc5c4b5ffad4556c945bbfed62b80e1e83325df58bf5c9636e2012c\" returns successfully" Oct 2 19:43:42.914319 kubelet[1522]: E1002 19:43:42.914263 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:42.921692 kubelet[1522]: E1002 19:43:42.921655 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:43.922676 kubelet[1522]: E1002 19:43:43.922599 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:44.923616 kubelet[1522]: E1002 19:43:44.923527 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
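The CrashLoopBackOff messages trace kubelet's per-container restart backoff: back-off 10s after the first failure (19:43:07), 20s after the second (19:43:23), and 40s later in this log, doubling on each failed restart up to a cap (five minutes with the kubelet defaults). A short sketch of that sequence:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Kubelet-style restart backoff for a crash-looping container: start at
	// 10s and double after every failed restart, capped at 5 minutes. The
	// 10s, 20s and 40s values in this log are the first three steps of the
	// sequence for the mount-cgroup init container.
	const (
		initial    = 10 * time.Second
		maxBackoff = 5 * time.Minute
	)
	backoff := initial
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: back-off %s\n", attempt, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}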
Oct 2 19:43:45.924137 kubelet[1522]: E1002 19:43:45.924078 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:46.924360 kubelet[1522]: E1002 19:43:46.924313 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:47.915757 kubelet[1522]: E1002 19:43:47.915697 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:47.925406 kubelet[1522]: E1002 19:43:47.925348 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:48.926580 kubelet[1522]: E1002 19:43:48.926522 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:49.925705 env[1128]: time="2023-10-02T19:43:49.925638639Z" level=info msg="CreateContainer within sandbox \"c39835642191a21a632bbcd4f6c9ca9d93e4b483265711edce12ea0c17945191\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:43:49.927731 kubelet[1522]: E1002 19:43:49.927692 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:49.944105 env[1128]: time="2023-10-02T19:43:49.943641738Z" level=info msg="CreateContainer within sandbox \"c39835642191a21a632bbcd4f6c9ca9d93e4b483265711edce12ea0c17945191\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"37f3f1d4a2d364a4b7d28bfed3bc7a5f74f7fb35de265684d20eeaaad143325d\"" Oct 2 19:43:49.944890 env[1128]: time="2023-10-02T19:43:49.944826017Z" level=info msg="StartContainer for \"37f3f1d4a2d364a4b7d28bfed3bc7a5f74f7fb35de265684d20eeaaad143325d\"" Oct 2 19:43:49.978351 systemd[1]: Started cri-containerd-37f3f1d4a2d364a4b7d28bfed3bc7a5f74f7fb35de265684d20eeaaad143325d.scope. Oct 2 19:43:49.995661 systemd[1]: cri-containerd-37f3f1d4a2d364a4b7d28bfed3bc7a5f74f7fb35de265684d20eeaaad143325d.scope: Deactivated successfully. Oct 2 19:43:50.001869 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37f3f1d4a2d364a4b7d28bfed3bc7a5f74f7fb35de265684d20eeaaad143325d-rootfs.mount: Deactivated successfully. 
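Attempt 3 of mount-cgroup is created above and, as the entries that follow show, fails exactly like the previous ones, so the keycreate EINVAL is systematic rather than transient. One way to probe the likely cause, sketched below under the assumption that selinuxfs is mounted at /sys/fs/selinux: writing a context to the selinuxfs context node asks the kernel to validate it against the loaded policy (the mechanism behind libselinux's security_check_context); if spc_t is not defined in the policy actually loaded on this host, the keycreate write can never succeed. Paths and contexts here are illustrative assumptions, not output from this system:

package main

import (
	"fmt"
	"os"
)

// checkContext asks the kernel whether a security context is valid under the
// currently loaded policy by writing it to selinuxfs. EINVAL means the policy
// does not define the context, which is one plausible reason the keycreate
// writes in this log keep failing with "invalid argument".
func checkContext(ctx string) error {
	f, err := os.OpenFile("/sys/fs/selinux/context", os.O_RDWR, 0)
	if err != nil {
		return err // e.g. selinuxfs not mounted at the assumed path
	}
	defer f.Close()
	_, err = f.WriteString(ctx)
	return err
}

func main() {
	for _, ctx := range []string{
		"system_u:system_r:spc_t:s0",           // what the pod spec asks for
		"system_u:system_r:svirt_lxc_net_t:s0", // the domain cilium-operator runs in above (level simplified)
	} {
		if err := checkContext(ctx); err != nil {
			fmt.Printf("%-40s invalid: %v\n", ctx, err)
			continue
		}
		fmt.Printf("%-40s valid\n", ctx)
	}
}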
Oct 2 19:43:50.017958 env[1128]: time="2023-10-02T19:43:50.017863536Z" level=info msg="shim disconnected" id=37f3f1d4a2d364a4b7d28bfed3bc7a5f74f7fb35de265684d20eeaaad143325d Oct 2 19:43:50.017958 env[1128]: time="2023-10-02T19:43:50.017944628Z" level=warning msg="cleaning up after shim disconnected" id=37f3f1d4a2d364a4b7d28bfed3bc7a5f74f7fb35de265684d20eeaaad143325d namespace=k8s.io Oct 2 19:43:50.017958 env[1128]: time="2023-10-02T19:43:50.017960848Z" level=info msg="cleaning up dead shim" Oct 2 19:43:50.032623 env[1128]: time="2023-10-02T19:43:50.032531480Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2385 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:43:50Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/37f3f1d4a2d364a4b7d28bfed3bc7a5f74f7fb35de265684d20eeaaad143325d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:43:50.033033 env[1128]: time="2023-10-02T19:43:50.032930931Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:43:50.033286 env[1128]: time="2023-10-02T19:43:50.033228490Z" level=error msg="Failed to pipe stdout of container \"37f3f1d4a2d364a4b7d28bfed3bc7a5f74f7fb35de265684d20eeaaad143325d\"" error="reading from a closed fifo" Oct 2 19:43:50.033510 env[1128]: time="2023-10-02T19:43:50.033435010Z" level=error msg="Failed to pipe stderr of container \"37f3f1d4a2d364a4b7d28bfed3bc7a5f74f7fb35de265684d20eeaaad143325d\"" error="reading from a closed fifo" Oct 2 19:43:50.036378 env[1128]: time="2023-10-02T19:43:50.036287069Z" level=error msg="StartContainer for \"37f3f1d4a2d364a4b7d28bfed3bc7a5f74f7fb35de265684d20eeaaad143325d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:43:50.036836 kubelet[1522]: E1002 19:43:50.036761 1522 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="37f3f1d4a2d364a4b7d28bfed3bc7a5f74f7fb35de265684d20eeaaad143325d" Oct 2 19:43:50.037095 kubelet[1522]: E1002 19:43:50.037054 1522 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:43:50.037095 kubelet[1522]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:43:50.037095 kubelet[1522]: rm /hostbin/cilium-mount Oct 2 19:43:50.037095 kubelet[1522]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5pwlf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-pwkkg_kube-system(ce3e93f2-a296-476f-867e-01304b1d1131): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:43:50.037431 kubelet[1522]: E1002 19:43:50.037141 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pwkkg" podUID="ce3e93f2-a296-476f-867e-01304b1d1131" Oct 2 19:43:50.499880 kubelet[1522]: I1002 19:43:50.499831 1522 scope.go:117] "RemoveContainer" containerID="c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d" Oct 2 19:43:50.500789 kubelet[1522]: I1002 19:43:50.500757 1522 scope.go:117] "RemoveContainer" containerID="c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d" Oct 2 19:43:50.501975 env[1128]: time="2023-10-02T19:43:50.501917520Z" level=info msg="RemoveContainer for \"c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d\"" Oct 2 19:43:50.503083 env[1128]: time="2023-10-02T19:43:50.503040297Z" level=info msg="RemoveContainer for \"c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d\"" Oct 2 19:43:50.503530 env[1128]: time="2023-10-02T19:43:50.503429574Z" level=error msg="RemoveContainer for \"c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d\" failed" error="failed to set removing state for container \"c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d\": container is already in removing state" Oct 2 19:43:50.503823 kubelet[1522]: E1002 19:43:50.503747 1522 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d\": 
container is already in removing state" containerID="c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d" Oct 2 19:43:50.503979 kubelet[1522]: E1002 19:43:50.503822 1522 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d": container is already in removing state; Skipping pod "cilium-pwkkg_kube-system(ce3e93f2-a296-476f-867e-01304b1d1131)" Oct 2 19:43:50.504568 kubelet[1522]: E1002 19:43:50.504542 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-pwkkg_kube-system(ce3e93f2-a296-476f-867e-01304b1d1131)\"" pod="kube-system/cilium-pwkkg" podUID="ce3e93f2-a296-476f-867e-01304b1d1131" Oct 2 19:43:50.506788 env[1128]: time="2023-10-02T19:43:50.506730444Z" level=info msg="RemoveContainer for \"c9b68a0382ba41b8e57abef44ced2b34e7204a6e04c5a4cff9bf703b6c72cf1d\" returns successfully" Oct 2 19:43:50.929205 kubelet[1522]: E1002 19:43:50.929030 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:51.930000 kubelet[1522]: E1002 19:43:51.929931 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:52.916952 kubelet[1522]: E1002 19:43:52.916898 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:52.931118 kubelet[1522]: E1002 19:43:52.931061 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:53.130188 kubelet[1522]: W1002 19:43:53.130113 1522 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce3e93f2_a296_476f_867e_01304b1d1131.slice/cri-containerd-37f3f1d4a2d364a4b7d28bfed3bc7a5f74f7fb35de265684d20eeaaad143325d.scope WatchSource:0}: task 37f3f1d4a2d364a4b7d28bfed3bc7a5f74f7fb35de265684d20eeaaad143325d not found: not found Oct 2 19:43:53.932185 kubelet[1522]: E1002 19:43:53.932046 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:54.932461 kubelet[1522]: E1002 19:43:54.932403 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:55.932749 kubelet[1522]: E1002 19:43:55.932687 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:56.933701 kubelet[1522]: E1002 19:43:56.933639 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:57.918391 kubelet[1522]: E1002 19:43:57.918346 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:57.934247 kubelet[1522]: E1002 19:43:57.934150 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:58.934864 kubelet[1522]: E1002 19:43:58.934797 1522 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:59.935529 kubelet[1522]: E1002 19:43:59.935449 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:00.936251 kubelet[1522]: E1002 19:44:00.936173 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:01.936752 kubelet[1522]: E1002 19:44:01.936683 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:02.752524 kubelet[1522]: E1002 19:44:02.752453 1522 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:02.919567 kubelet[1522]: E1002 19:44:02.919524 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:44:02.937194 kubelet[1522]: E1002 19:44:02.937115 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:03.938233 kubelet[1522]: E1002 19:44:03.938164 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:04.939105 kubelet[1522]: E1002 19:44:04.939030 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:05.923233 kubelet[1522]: E1002 19:44:05.923171 1522 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-pwkkg_kube-system(ce3e93f2-a296-476f-867e-01304b1d1131)\"" pod="kube-system/cilium-pwkkg" podUID="ce3e93f2-a296-476f-867e-01304b1d1131" Oct 2 19:44:05.942543 kubelet[1522]: E1002 19:44:05.942508 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:06.100468 kubelet[1522]: E1002 19:44:06.100431 1522 configmap.go:199] Couldn't get configMap kube-system/cilium-config: configmap "cilium-config" not found Oct 2 19:44:06.100823 kubelet[1522]: E1002 19:44:06.100529 1522 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: secret "cilium-ipsec-keys" not found Oct 2 19:44:06.101029 kubelet[1522]: E1002 19:44:06.100998 1522 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-config-path podName:ce3e93f2-a296-476f-867e-01304b1d1131 nodeName:}" failed. No retries permitted until 2023-10-02 19:44:06.600795976 +0000 UTC m=+264.902755502 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-config-path") pod "cilium-pwkkg" (UID: "ce3e93f2-a296-476f-867e-01304b1d1131") : configmap "cilium-config" not found Oct 2 19:44:06.101225 kubelet[1522]: E1002 19:44:06.101119 1522 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-ipsec-secrets podName:ce3e93f2-a296-476f-867e-01304b1d1131 nodeName:}" failed. No retries permitted until 2023-10-02 19:44:06.601100337 +0000 UTC m=+264.903059854 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-ipsec-secrets") pod "cilium-pwkkg" (UID: "ce3e93f2-a296-476f-867e-01304b1d1131") : secret "cilium-ipsec-keys" not found Oct 2 19:44:06.127477 env[1128]: time="2023-10-02T19:44:06.127397447Z" level=info msg="StopContainer for \"615e54e7000873e78f4746e3df9e6c6dbfbaedae0753cc6ae86100c6357d2a10\" with timeout 30 (s)" Oct 2 19:44:06.128048 env[1128]: time="2023-10-02T19:44:06.128006525Z" level=info msg="Stop container \"615e54e7000873e78f4746e3df9e6c6dbfbaedae0753cc6ae86100c6357d2a10\" with signal terminated" Oct 2 19:44:06.149246 systemd[1]: cri-containerd-615e54e7000873e78f4746e3df9e6c6dbfbaedae0753cc6ae86100c6357d2a10.scope: Deactivated successfully. Oct 2 19:44:06.148000 audit: BPF prog-id=89 op=UNLOAD Oct 2 19:44:06.155537 kernel: kauditd_printk_skb: 50 callbacks suppressed Oct 2 19:44:06.155680 kernel: audit: type=1334 audit(1696275846.148:759): prog-id=89 op=UNLOAD Oct 2 19:44:06.163000 audit: BPF prog-id=92 op=UNLOAD Oct 2 19:44:06.172576 kernel: audit: type=1334 audit(1696275846.163:760): prog-id=92 op=UNLOAD Oct 2 19:44:06.187336 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-615e54e7000873e78f4746e3df9e6c6dbfbaedae0753cc6ae86100c6357d2a10-rootfs.mount: Deactivated successfully. Oct 2 19:44:06.202740 env[1128]: time="2023-10-02T19:44:06.202672628Z" level=info msg="shim disconnected" id=615e54e7000873e78f4746e3df9e6c6dbfbaedae0753cc6ae86100c6357d2a10 Oct 2 19:44:06.202740 env[1128]: time="2023-10-02T19:44:06.202740805Z" level=warning msg="cleaning up after shim disconnected" id=615e54e7000873e78f4746e3df9e6c6dbfbaedae0753cc6ae86100c6357d2a10 namespace=k8s.io Oct 2 19:44:06.203066 env[1128]: time="2023-10-02T19:44:06.202755475Z" level=info msg="cleaning up dead shim" Oct 2 19:44:06.214260 env[1128]: time="2023-10-02T19:44:06.214210087Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:44:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2422 runtime=io.containerd.runc.v2\n" Oct 2 19:44:06.216888 env[1128]: time="2023-10-02T19:44:06.216833459Z" level=info msg="StopContainer for \"615e54e7000873e78f4746e3df9e6c6dbfbaedae0753cc6ae86100c6357d2a10\" returns successfully" Oct 2 19:44:06.217594 env[1128]: time="2023-10-02T19:44:06.217538166Z" level=info msg="StopPodSandbox for \"169af98fe848359f5e945e04a10ceb5d456aafc56833eaab19d9163e71f8d0a2\"" Oct 2 19:44:06.217730 env[1128]: time="2023-10-02T19:44:06.217631895Z" level=info msg="Container to stop \"615e54e7000873e78f4746e3df9e6c6dbfbaedae0753cc6ae86100c6357d2a10\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:44:06.219979 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-169af98fe848359f5e945e04a10ceb5d456aafc56833eaab19d9163e71f8d0a2-shm.mount: Deactivated successfully. Oct 2 19:44:06.230003 systemd[1]: cri-containerd-169af98fe848359f5e945e04a10ceb5d456aafc56833eaab19d9163e71f8d0a2.scope: Deactivated successfully. Oct 2 19:44:06.228000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:44:06.238582 kernel: audit: type=1334 audit(1696275846.228:761): prog-id=81 op=UNLOAD Oct 2 19:44:06.238000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:44:06.247533 kernel: audit: type=1334 audit(1696275846.238:762): prog-id=84 op=UNLOAD Oct 2 19:44:06.263353 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-169af98fe848359f5e945e04a10ceb5d456aafc56833eaab19d9163e71f8d0a2-rootfs.mount: Deactivated successfully. 
Oct 2 19:44:06.275319 env[1128]: time="2023-10-02T19:44:06.275252712Z" level=info msg="shim disconnected" id=169af98fe848359f5e945e04a10ceb5d456aafc56833eaab19d9163e71f8d0a2 Oct 2 19:44:06.275319 env[1128]: time="2023-10-02T19:44:06.275320764Z" level=warning msg="cleaning up after shim disconnected" id=169af98fe848359f5e945e04a10ceb5d456aafc56833eaab19d9163e71f8d0a2 namespace=k8s.io Oct 2 19:44:06.275706 env[1128]: time="2023-10-02T19:44:06.275335622Z" level=info msg="cleaning up dead shim" Oct 2 19:44:06.287528 env[1128]: time="2023-10-02T19:44:06.287445086Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:44:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2453 runtime=io.containerd.runc.v2\n" Oct 2 19:44:06.287919 env[1128]: time="2023-10-02T19:44:06.287881864Z" level=info msg="TearDown network for sandbox \"169af98fe848359f5e945e04a10ceb5d456aafc56833eaab19d9163e71f8d0a2\" successfully" Oct 2 19:44:06.288090 env[1128]: time="2023-10-02T19:44:06.287919865Z" level=info msg="StopPodSandbox for \"169af98fe848359f5e945e04a10ceb5d456aafc56833eaab19d9163e71f8d0a2\" returns successfully" Oct 2 19:44:06.401959 kubelet[1522]: I1002 19:44:06.401887 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfpst\" (UniqueName: \"kubernetes.io/projected/7aee739b-cc9c-4d54-a10d-3b3dcfb5b81c-kube-api-access-jfpst\") pod \"7aee739b-cc9c-4d54-a10d-3b3dcfb5b81c\" (UID: \"7aee739b-cc9c-4d54-a10d-3b3dcfb5b81c\") " Oct 2 19:44:06.401959 kubelet[1522]: I1002 19:44:06.401954 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7aee739b-cc9c-4d54-a10d-3b3dcfb5b81c-cilium-config-path\") pod \"7aee739b-cc9c-4d54-a10d-3b3dcfb5b81c\" (UID: \"7aee739b-cc9c-4d54-a10d-3b3dcfb5b81c\") " Oct 2 19:44:06.406151 kubelet[1522]: I1002 19:44:06.406111 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aee739b-cc9c-4d54-a10d-3b3dcfb5b81c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7aee739b-cc9c-4d54-a10d-3b3dcfb5b81c" (UID: "7aee739b-cc9c-4d54-a10d-3b3dcfb5b81c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:44:06.409836 systemd[1]: var-lib-kubelet-pods-7aee739b\x2dcc9c\x2d4d54\x2da10d\x2d3b3dcfb5b81c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djfpst.mount: Deactivated successfully. Oct 2 19:44:06.411425 kubelet[1522]: I1002 19:44:06.411389 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aee739b-cc9c-4d54-a10d-3b3dcfb5b81c-kube-api-access-jfpst" (OuterVolumeSpecName: "kube-api-access-jfpst") pod "7aee739b-cc9c-4d54-a10d-3b3dcfb5b81c" (UID: "7aee739b-cc9c-4d54-a10d-3b3dcfb5b81c"). InnerVolumeSpecName "kube-api-access-jfpst". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:44:06.502934 kubelet[1522]: I1002 19:44:06.502771 1522 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jfpst\" (UniqueName: \"kubernetes.io/projected/7aee739b-cc9c-4d54-a10d-3b3dcfb5b81c-kube-api-access-jfpst\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:44:06.502934 kubelet[1522]: I1002 19:44:06.502817 1522 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7aee739b-cc9c-4d54-a10d-3b3dcfb5b81c-cilium-config-path\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:44:06.533355 kubelet[1522]: I1002 19:44:06.533323 1522 scope.go:117] "RemoveContainer" containerID="615e54e7000873e78f4746e3df9e6c6dbfbaedae0753cc6ae86100c6357d2a10" Oct 2 19:44:06.534354 env[1128]: time="2023-10-02T19:44:06.534306247Z" level=info msg="StopPodSandbox for \"c39835642191a21a632bbcd4f6c9ca9d93e4b483265711edce12ea0c17945191\"" Oct 2 19:44:06.536967 env[1128]: time="2023-10-02T19:44:06.534393382Z" level=info msg="Container to stop \"37f3f1d4a2d364a4b7d28bfed3bc7a5f74f7fb35de265684d20eeaaad143325d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:44:06.536706 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c39835642191a21a632bbcd4f6c9ca9d93e4b483265711edce12ea0c17945191-shm.mount: Deactivated successfully. Oct 2 19:44:06.543021 systemd[1]: Removed slice kubepods-besteffort-pod7aee739b_cc9c_4d54_a10d_3b3dcfb5b81c.slice. Oct 2 19:44:06.545457 env[1128]: time="2023-10-02T19:44:06.545401626Z" level=info msg="RemoveContainer for \"615e54e7000873e78f4746e3df9e6c6dbfbaedae0753cc6ae86100c6357d2a10\"" Oct 2 19:44:06.551189 env[1128]: time="2023-10-02T19:44:06.551114917Z" level=info msg="RemoveContainer for \"615e54e7000873e78f4746e3df9e6c6dbfbaedae0753cc6ae86100c6357d2a10\" returns successfully" Oct 2 19:44:06.551792 kubelet[1522]: I1002 19:44:06.551743 1522 scope.go:117] "RemoveContainer" containerID="615e54e7000873e78f4746e3df9e6c6dbfbaedae0753cc6ae86100c6357d2a10" Oct 2 19:44:06.552390 env[1128]: time="2023-10-02T19:44:06.552265946Z" level=error msg="ContainerStatus for \"615e54e7000873e78f4746e3df9e6c6dbfbaedae0753cc6ae86100c6357d2a10\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"615e54e7000873e78f4746e3df9e6c6dbfbaedae0753cc6ae86100c6357d2a10\": not found" Oct 2 19:44:06.552661 kubelet[1522]: E1002 19:44:06.552637 1522 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"615e54e7000873e78f4746e3df9e6c6dbfbaedae0753cc6ae86100c6357d2a10\": not found" containerID="615e54e7000873e78f4746e3df9e6c6dbfbaedae0753cc6ae86100c6357d2a10" Oct 2 19:44:06.552783 kubelet[1522]: I1002 19:44:06.552688 1522 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"615e54e7000873e78f4746e3df9e6c6dbfbaedae0753cc6ae86100c6357d2a10"} err="failed to get container status \"615e54e7000873e78f4746e3df9e6c6dbfbaedae0753cc6ae86100c6357d2a10\": rpc error: code = NotFound desc = an error occurred when try to find container \"615e54e7000873e78f4746e3df9e6c6dbfbaedae0753cc6ae86100c6357d2a10\": not found" Oct 2 19:44:06.554000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:44:06.556040 systemd[1]: cri-containerd-c39835642191a21a632bbcd4f6c9ca9d93e4b483265711edce12ea0c17945191.scope: Deactivated successfully. 
Oct 2 19:44:06.564507 kernel: audit: type=1334 audit(1696275846.554:763): prog-id=85 op=UNLOAD Oct 2 19:44:06.565000 audit: BPF prog-id=88 op=UNLOAD Oct 2 19:44:06.574523 kernel: audit: type=1334 audit(1696275846.565:764): prog-id=88 op=UNLOAD Oct 2 19:44:06.603881 kubelet[1522]: E1002 19:44:06.603842 1522 configmap.go:199] Couldn't get configMap kube-system/cilium-config: configmap "cilium-config" not found Oct 2 19:44:06.604185 kubelet[1522]: E1002 19:44:06.603926 1522 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-config-path podName:ce3e93f2-a296-476f-867e-01304b1d1131 nodeName:}" failed. No retries permitted until 2023-10-02 19:44:07.603904204 +0000 UTC m=+265.905863729 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-config-path") pod "cilium-pwkkg" (UID: "ce3e93f2-a296-476f-867e-01304b1d1131") : configmap "cilium-config" not found Oct 2 19:44:06.604560 kubelet[1522]: E1002 19:44:06.604373 1522 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: secret "cilium-ipsec-keys" not found Oct 2 19:44:06.604560 kubelet[1522]: E1002 19:44:06.604432 1522 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-ipsec-secrets podName:ce3e93f2-a296-476f-867e-01304b1d1131 nodeName:}" failed. No retries permitted until 2023-10-02 19:44:07.60441493 +0000 UTC m=+265.906374443 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-ipsec-secrets") pod "cilium-pwkkg" (UID: "ce3e93f2-a296-476f-867e-01304b1d1131") : secret "cilium-ipsec-keys" not found Oct 2 19:44:06.607566 env[1128]: time="2023-10-02T19:44:06.607504037Z" level=info msg="shim disconnected" id=c39835642191a21a632bbcd4f6c9ca9d93e4b483265711edce12ea0c17945191 Oct 2 19:44:06.607711 env[1128]: time="2023-10-02T19:44:06.607572014Z" level=warning msg="cleaning up after shim disconnected" id=c39835642191a21a632bbcd4f6c9ca9d93e4b483265711edce12ea0c17945191 namespace=k8s.io Oct 2 19:44:06.607711 env[1128]: time="2023-10-02T19:44:06.607588436Z" level=info msg="cleaning up dead shim" Oct 2 19:44:06.619580 env[1128]: time="2023-10-02T19:44:06.619465623Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:44:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2486 runtime=io.containerd.runc.v2\n" Oct 2 19:44:06.619970 env[1128]: time="2023-10-02T19:44:06.619930909Z" level=info msg="TearDown network for sandbox \"c39835642191a21a632bbcd4f6c9ca9d93e4b483265711edce12ea0c17945191\" successfully" Oct 2 19:44:06.620093 env[1128]: time="2023-10-02T19:44:06.619968311Z" level=info msg="StopPodSandbox for \"c39835642191a21a632bbcd4f6c9ca9d93e4b483265711edce12ea0c17945191\" returns successfully" Oct 2 19:44:06.704621 kubelet[1522]: I1002 19:44:06.704562 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-ipsec-secrets\") pod \"ce3e93f2-a296-476f-867e-01304b1d1131\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " Oct 2 19:44:06.704946 kubelet[1522]: I1002 19:44:06.704906 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-hostproc\") pod \"ce3e93f2-a296-476f-867e-01304b1d1131\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " Oct 2 19:44:06.705252 kubelet[1522]: I1002 19:44:06.705227 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce3e93f2-a296-476f-867e-01304b1d1131-clustermesh-secrets\") pod \"ce3e93f2-a296-476f-867e-01304b1d1131\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " Oct 2 19:44:06.705380 kubelet[1522]: I1002 19:44:06.705272 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-host-proc-sys-kernel\") pod \"ce3e93f2-a296-476f-867e-01304b1d1131\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " Oct 2 19:44:06.705380 kubelet[1522]: I1002 19:44:06.705313 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ce3e93f2-a296-476f-867e-01304b1d1131" (UID: "ce3e93f2-a296-476f-867e-01304b1d1131"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:44:06.705380 kubelet[1522]: I1002 19:44:06.705153 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-hostproc" (OuterVolumeSpecName: "hostproc") pod "ce3e93f2-a296-476f-867e-01304b1d1131" (UID: "ce3e93f2-a296-476f-867e-01304b1d1131"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:44:06.709808 kubelet[1522]: I1002 19:44:06.709770 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce3e93f2-a296-476f-867e-01304b1d1131-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ce3e93f2-a296-476f-867e-01304b1d1131" (UID: "ce3e93f2-a296-476f-867e-01304b1d1131"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:44:06.710152 kubelet[1522]: I1002 19:44:06.710120 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "ce3e93f2-a296-476f-867e-01304b1d1131" (UID: "ce3e93f2-a296-476f-867e-01304b1d1131"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:44:06.807697 kubelet[1522]: I1002 19:44:06.805867 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ce3e93f2-a296-476f-867e-01304b1d1131" (UID: "ce3e93f2-a296-476f-867e-01304b1d1131"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:44:06.807697 kubelet[1522]: I1002 19:44:06.805891 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-cgroup\") pod \"ce3e93f2-a296-476f-867e-01304b1d1131\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " Oct 2 19:44:06.807697 kubelet[1522]: I1002 19:44:06.805981 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-host-proc-sys-net\") pod \"ce3e93f2-a296-476f-867e-01304b1d1131\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " Oct 2 19:44:06.807697 kubelet[1522]: I1002 19:44:06.806011 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-run\") pod \"ce3e93f2-a296-476f-867e-01304b1d1131\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " Oct 2 19:44:06.807697 kubelet[1522]: I1002 19:44:06.806039 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-lib-modules\") pod \"ce3e93f2-a296-476f-867e-01304b1d1131\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " Oct 2 19:44:06.807697 kubelet[1522]: I1002 19:44:06.806073 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce3e93f2-a296-476f-867e-01304b1d1131-hubble-tls\") pod \"ce3e93f2-a296-476f-867e-01304b1d1131\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " Oct 2 19:44:06.807697 kubelet[1522]: I1002 19:44:06.806099 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-etc-cni-netd\") pod \"ce3e93f2-a296-476f-867e-01304b1d1131\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " Oct 2 19:44:06.807697 kubelet[1522]: I1002 19:44:06.806125 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-cni-path\") pod \"ce3e93f2-a296-476f-867e-01304b1d1131\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " Oct 2 19:44:06.807697 kubelet[1522]: I1002 19:44:06.806156 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-xtables-lock\") pod \"ce3e93f2-a296-476f-867e-01304b1d1131\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " Oct 2 19:44:06.807697 kubelet[1522]: I1002 19:44:06.806192 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pwlf\" (UniqueName: \"kubernetes.io/projected/ce3e93f2-a296-476f-867e-01304b1d1131-kube-api-access-5pwlf\") pod \"ce3e93f2-a296-476f-867e-01304b1d1131\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " Oct 2 19:44:06.807697 kubelet[1522]: I1002 19:44:06.806234 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-config-path\") pod \"ce3e93f2-a296-476f-867e-01304b1d1131\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " Oct 2 
19:44:06.807697 kubelet[1522]: I1002 19:44:06.806262 1522 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-bpf-maps\") pod \"ce3e93f2-a296-476f-867e-01304b1d1131\" (UID: \"ce3e93f2-a296-476f-867e-01304b1d1131\") " Oct 2 19:44:06.807697 kubelet[1522]: I1002 19:44:06.806297 1522 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-ipsec-secrets\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:44:06.807697 kubelet[1522]: I1002 19:44:06.806317 1522 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-hostproc\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:44:06.807697 kubelet[1522]: I1002 19:44:06.806335 1522 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce3e93f2-a296-476f-867e-01304b1d1131-clustermesh-secrets\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:44:06.807697 kubelet[1522]: I1002 19:44:06.806356 1522 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-host-proc-sys-kernel\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:44:06.807697 kubelet[1522]: I1002 19:44:06.806375 1522 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-cgroup\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:44:06.808812 kubelet[1522]: I1002 19:44:06.806403 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ce3e93f2-a296-476f-867e-01304b1d1131" (UID: "ce3e93f2-a296-476f-867e-01304b1d1131"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:44:06.808812 kubelet[1522]: I1002 19:44:06.806432 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ce3e93f2-a296-476f-867e-01304b1d1131" (UID: "ce3e93f2-a296-476f-867e-01304b1d1131"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:44:06.808812 kubelet[1522]: I1002 19:44:06.806456 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ce3e93f2-a296-476f-867e-01304b1d1131" (UID: "ce3e93f2-a296-476f-867e-01304b1d1131"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:44:06.808812 kubelet[1522]: I1002 19:44:06.806509 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ce3e93f2-a296-476f-867e-01304b1d1131" (UID: "ce3e93f2-a296-476f-867e-01304b1d1131"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:44:06.809149 kubelet[1522]: I1002 19:44:06.809117 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ce3e93f2-a296-476f-867e-01304b1d1131" (UID: "ce3e93f2-a296-476f-867e-01304b1d1131"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:44:06.809318 kubelet[1522]: I1002 19:44:06.809294 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-cni-path" (OuterVolumeSpecName: "cni-path") pod "ce3e93f2-a296-476f-867e-01304b1d1131" (UID: "ce3e93f2-a296-476f-867e-01304b1d1131"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:44:06.809997 kubelet[1522]: I1002 19:44:06.809468 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ce3e93f2-a296-476f-867e-01304b1d1131" (UID: "ce3e93f2-a296-476f-867e-01304b1d1131"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:44:06.813460 kubelet[1522]: I1002 19:44:06.813426 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ce3e93f2-a296-476f-867e-01304b1d1131" (UID: "ce3e93f2-a296-476f-867e-01304b1d1131"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:44:06.814655 kubelet[1522]: I1002 19:44:06.814619 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce3e93f2-a296-476f-867e-01304b1d1131-kube-api-access-5pwlf" (OuterVolumeSpecName: "kube-api-access-5pwlf") pod "ce3e93f2-a296-476f-867e-01304b1d1131" (UID: "ce3e93f2-a296-476f-867e-01304b1d1131"). InnerVolumeSpecName "kube-api-access-5pwlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:44:06.815296 kubelet[1522]: I1002 19:44:06.815232 1522 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce3e93f2-a296-476f-867e-01304b1d1131-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ce3e93f2-a296-476f-867e-01304b1d1131" (UID: "ce3e93f2-a296-476f-867e-01304b1d1131"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:44:06.907500 kubelet[1522]: I1002 19:44:06.907435 1522 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-etc-cni-netd\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:44:06.907500 kubelet[1522]: I1002 19:44:06.907474 1522 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-cni-path\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:44:06.907500 kubelet[1522]: I1002 19:44:06.907507 1522 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-xtables-lock\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:44:06.907789 kubelet[1522]: I1002 19:44:06.907522 1522 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce3e93f2-a296-476f-867e-01304b1d1131-hubble-tls\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:44:06.907789 kubelet[1522]: I1002 19:44:06.907538 1522 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5pwlf\" (UniqueName: \"kubernetes.io/projected/ce3e93f2-a296-476f-867e-01304b1d1131-kube-api-access-5pwlf\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:44:06.907789 kubelet[1522]: I1002 19:44:06.907560 1522 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-config-path\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:44:06.907789 kubelet[1522]: I1002 19:44:06.907573 1522 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-bpf-maps\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:44:06.907789 kubelet[1522]: I1002 19:44:06.907588 1522 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-host-proc-sys-net\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:44:06.907789 kubelet[1522]: I1002 19:44:06.907603 1522 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-cilium-run\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:44:06.907789 kubelet[1522]: I1002 19:44:06.907617 1522 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce3e93f2-a296-476f-867e-01304b1d1131-lib-modules\") on node \"10.128.0.92\" DevicePath \"\"" Oct 2 19:44:06.924941 kubelet[1522]: I1002 19:44:06.924906 1522 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7aee739b-cc9c-4d54-a10d-3b3dcfb5b81c" path="/var/lib/kubelet/pods/7aee739b-cc9c-4d54-a10d-3b3dcfb5b81c/volumes" Oct 2 19:44:06.930694 systemd[1]: Removed slice kubepods-burstable-podce3e93f2_a296_476f_867e_01304b1d1131.slice. Oct 2 19:44:06.943973 kubelet[1522]: E1002 19:44:06.943936 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:07.187175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c39835642191a21a632bbcd4f6c9ca9d93e4b483265711edce12ea0c17945191-rootfs.mount: Deactivated successfully. 
Oct 2 19:44:07.187341 systemd[1]: var-lib-kubelet-pods-ce3e93f2\x2da296\x2d476f\x2d867e\x2d01304b1d1131-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5pwlf.mount: Deactivated successfully. Oct 2 19:44:07.187447 systemd[1]: var-lib-kubelet-pods-ce3e93f2\x2da296\x2d476f\x2d867e\x2d01304b1d1131-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 2 19:44:07.187572 systemd[1]: var-lib-kubelet-pods-ce3e93f2\x2da296\x2d476f\x2d867e\x2d01304b1d1131-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:44:07.187671 systemd[1]: var-lib-kubelet-pods-ce3e93f2\x2da296\x2d476f\x2d867e\x2d01304b1d1131-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:44:07.537841 kubelet[1522]: I1002 19:44:07.537348 1522 scope.go:117] "RemoveContainer" containerID="37f3f1d4a2d364a4b7d28bfed3bc7a5f74f7fb35de265684d20eeaaad143325d" Oct 2 19:44:07.541071 env[1128]: time="2023-10-02T19:44:07.541016036Z" level=info msg="RemoveContainer for \"37f3f1d4a2d364a4b7d28bfed3bc7a5f74f7fb35de265684d20eeaaad143325d\"" Oct 2 19:44:07.545505 env[1128]: time="2023-10-02T19:44:07.545433059Z" level=info msg="RemoveContainer for \"37f3f1d4a2d364a4b7d28bfed3bc7a5f74f7fb35de265684d20eeaaad143325d\" returns successfully" Oct 2 19:44:07.920524 kubelet[1522]: E1002 19:44:07.920357 1522 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:44:07.944925 kubelet[1522]: E1002 19:44:07.944839 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:08.924652 kubelet[1522]: I1002 19:44:08.924592 1522 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ce3e93f2-a296-476f-867e-01304b1d1131" path="/var/lib/kubelet/pods/ce3e93f2-a296-476f-867e-01304b1d1131/volumes" Oct 2 19:44:08.945530 kubelet[1522]: E1002 19:44:08.945453 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:09.945812 kubelet[1522]: E1002 19:44:09.945747 1522 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"