Oct 2 19:17:10.050709 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023 Oct 2 19:17:10.050740 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:17:10.050753 kernel: BIOS-provided physical RAM map: Oct 2 19:17:10.050762 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 2 19:17:10.050771 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Oct 2 19:17:10.050780 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Oct 2 19:17:10.050795 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Oct 2 19:17:10.050805 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Oct 2 19:17:10.050816 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Oct 2 19:17:10.050826 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Oct 2 19:17:10.050837 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Oct 2 19:17:10.050847 kernel: printk: bootconsole [earlyser0] enabled Oct 2 19:17:10.050857 kernel: NX (Execute Disable) protection: active Oct 2 19:17:10.050868 kernel: efi: EFI v2.70 by Microsoft Oct 2 19:17:10.050884 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5caa98 RNG=0x3ffd1018 Oct 2 19:17:10.050895 kernel: random: crng init done Oct 2 19:17:10.050939 kernel: SMBIOS 3.1.0 present. 
Oct 2 19:17:10.050951 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 05/09/2022 Oct 2 19:17:10.050963 kernel: Hypervisor detected: Microsoft Hyper-V Oct 2 19:17:10.050974 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Oct 2 19:17:10.050985 kernel: Hyper-V Host Build:20348-10.0-1-0.1462 Oct 2 19:17:10.050996 kernel: Hyper-V: Nested features: 0x1e0101 Oct 2 19:17:10.051010 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Oct 2 19:17:10.051021 kernel: Hyper-V: Using hypercall for remote TLB flush Oct 2 19:17:10.051033 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Oct 2 19:17:10.051044 kernel: tsc: Marking TSC unstable due to running on Hyper-V Oct 2 19:17:10.051056 kernel: tsc: Detected 2593.905 MHz processor Oct 2 19:17:10.051067 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 2 19:17:10.051078 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 2 19:17:10.051090 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Oct 2 19:17:10.051101 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 2 19:17:10.051113 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Oct 2 19:17:10.051126 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Oct 2 19:17:10.051137 kernel: Using GB pages for direct mapping Oct 2 19:17:10.051147 kernel: Secure boot disabled Oct 2 19:17:10.051156 kernel: ACPI: Early table checksum verification disabled Oct 2 19:17:10.051168 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Oct 2 19:17:10.051179 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 19:17:10.051191 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 19:17:10.051203 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Oct 2 19:17:10.051222 kernel: ACPI: FACS 0x000000003FFFE000 000040 Oct 2 19:17:10.051234 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 19:17:10.051246 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 19:17:10.051259 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 19:17:10.051271 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 19:17:10.051283 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 19:17:10.051298 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 19:17:10.051310 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 19:17:10.051323 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Oct 2 19:17:10.051335 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Oct 2 19:17:10.051347 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Oct 2 19:17:10.051359 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Oct 2 19:17:10.051372 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Oct 2 19:17:10.051384 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Oct 2 19:17:10.051399 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Oct 2 19:17:10.051411 kernel: ACPI: 
Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Oct 2 19:17:10.051423 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Oct 2 19:17:10.051436 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Oct 2 19:17:10.051448 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Oct 2 19:17:10.051460 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Oct 2 19:17:10.051472 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Oct 2 19:17:10.051484 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Oct 2 19:17:10.051497 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Oct 2 19:17:10.051511 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Oct 2 19:17:10.051523 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Oct 2 19:17:10.051536 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Oct 2 19:17:10.051548 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Oct 2 19:17:10.051560 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Oct 2 19:17:10.051572 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Oct 2 19:17:10.051585 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Oct 2 19:17:10.051597 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Oct 2 19:17:10.051609 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Oct 2 19:17:10.051624 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Oct 2 19:17:10.051636 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Oct 2 19:17:10.051648 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Oct 2 19:17:10.051661 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Oct 2 19:17:10.051674 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Oct 2 19:17:10.051686 kernel: NODE_DATA(0) allocated [mem 0x2bfff9000-0x2bfffefff] Oct 2 19:17:10.051698 kernel: Zone ranges: Oct 2 19:17:10.051710 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 2 19:17:10.051723 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Oct 2 19:17:10.051737 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Oct 2 19:17:10.051749 kernel: Movable zone start for each node Oct 2 19:17:10.051762 kernel: Early memory node ranges Oct 2 19:17:10.051774 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Oct 2 19:17:10.051787 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Oct 2 19:17:10.051799 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Oct 2 19:17:10.051811 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Oct 2 19:17:10.051824 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Oct 2 19:17:10.051836 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 19:17:10.051850 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Oct 2 19:17:10.051863 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Oct 2 19:17:10.051875 kernel: ACPI: PM-Timer IO Port: 0x408 Oct 2 19:17:10.051887 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Oct 2 19:17:10.051899 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Oct 2 19:17:10.051919 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 
2 19:17:10.051931 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 2 19:17:10.051943 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Oct 2 19:17:10.051956 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Oct 2 19:17:10.051970 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Oct 2 19:17:10.051982 kernel: Booting paravirtualized kernel on Hyper-V Oct 2 19:17:10.051995 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 2 19:17:10.052008 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Oct 2 19:17:10.052020 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Oct 2 19:17:10.052032 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Oct 2 19:17:10.052044 kernel: pcpu-alloc: [0] 0 1 Oct 2 19:17:10.052056 kernel: Hyper-V: PV spinlocks enabled Oct 2 19:17:10.052068 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 2 19:17:10.052083 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Oct 2 19:17:10.052095 kernel: Policy zone: Normal Oct 2 19:17:10.052109 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:17:10.052121 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:17:10.052134 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Oct 2 19:17:10.052146 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 19:17:10.052158 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:17:10.052171 kernel: Memory: 8073732K/8387460K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 313468K reserved, 0K cma-reserved) Oct 2 19:17:10.052185 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 2 19:17:10.052197 kernel: ftrace: allocating 34453 entries in 135 pages Oct 2 19:17:10.052218 kernel: ftrace: allocated 135 pages with 4 groups Oct 2 19:17:10.052233 kernel: rcu: Hierarchical RCU implementation. Oct 2 19:17:10.052247 kernel: rcu: RCU event tracing is enabled. Oct 2 19:17:10.052260 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 2 19:17:10.052276 kernel: Rude variant of Tasks RCU enabled. Oct 2 19:17:10.052298 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:17:10.052319 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 2 19:17:10.052331 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 2 19:17:10.052343 kernel: Using NULL legacy PIC Oct 2 19:17:10.052359 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Oct 2 19:17:10.052372 kernel: Console: colour dummy device 80x25 Oct 2 19:17:10.052385 kernel: printk: console [tty1] enabled Oct 2 19:17:10.052396 kernel: printk: console [ttyS0] enabled Oct 2 19:17:10.052406 kernel: printk: bootconsole [earlyser0] disabled Oct 2 19:17:10.052421 kernel: ACPI: Core revision 20210730 Oct 2 19:17:10.052434 kernel: Failed to register legacy timer interrupt Oct 2 19:17:10.052446 kernel: APIC: Switch to symmetric I/O mode setup Oct 2 19:17:10.052457 kernel: Hyper-V: Using IPI hypercalls Oct 2 19:17:10.052470 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Oct 2 19:17:10.052482 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Oct 2 19:17:10.052493 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Oct 2 19:17:10.052511 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 2 19:17:10.052523 kernel: Spectre V2 : Mitigation: Retpolines Oct 2 19:17:10.052534 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 2 19:17:10.052548 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 2 19:17:10.052560 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Oct 2 19:17:10.052572 kernel: RETBleed: Vulnerable Oct 2 19:17:10.052583 kernel: Speculative Store Bypass: Vulnerable Oct 2 19:17:10.052594 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Oct 2 19:17:10.052605 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 2 19:17:10.052617 kernel: GDS: Unknown: Dependent on hypervisor status Oct 2 19:17:10.052629 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 2 19:17:10.052640 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 2 19:17:10.052654 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 2 19:17:10.052667 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Oct 2 19:17:10.052680 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Oct 2 19:17:10.052692 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Oct 2 19:17:10.052706 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 2 19:17:10.052719 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Oct 2 19:17:10.052732 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Oct 2 19:17:10.052746 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Oct 2 19:17:10.052759 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Oct 2 19:17:10.052772 kernel: Freeing SMP alternatives memory: 32K Oct 2 19:17:10.052785 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:17:10.052798 kernel: LSM: Security Framework initializing Oct 2 19:17:10.052811 kernel: SELinux: Initializing. 
Oct 2 19:17:10.052826 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 2 19:17:10.052839 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 2 19:17:10.052853 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Oct 2 19:17:10.052866 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Oct 2 19:17:10.052880 kernel: signal: max sigframe size: 3632 Oct 2 19:17:10.052893 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:17:10.052921 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 2 19:17:10.052934 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:17:10.052948 kernel: x86: Booting SMP configuration: Oct 2 19:17:10.052961 kernel: .... node #0, CPUs: #1 Oct 2 19:17:10.052978 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Oct 2 19:17:10.052992 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Oct 2 19:17:10.053005 kernel: smp: Brought up 1 node, 2 CPUs Oct 2 19:17:10.053018 kernel: smpboot: Max logical packages: 1 Oct 2 19:17:10.053032 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Oct 2 19:17:10.053045 kernel: devtmpfs: initialized Oct 2 19:17:10.053059 kernel: x86/mm: Memory block size: 128MB Oct 2 19:17:10.053072 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Oct 2 19:17:10.053088 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:17:10.053101 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 2 19:17:10.053115 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:17:10.053128 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:17:10.053141 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:17:10.053155 kernel: audit: type=2000 audit(1696274229.024:1): state=initialized audit_enabled=0 res=1 Oct 2 19:17:10.053168 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:17:10.053181 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 2 19:17:10.053193 kernel: cpuidle: using governor menu Oct 2 19:17:10.053208 kernel: ACPI: bus type PCI registered Oct 2 19:17:10.053221 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:17:10.053234 kernel: dca service started, version 1.12.1 Oct 2 19:17:10.053247 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 2 19:17:10.053260 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Oct 2 19:17:10.053274 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Oct 2 19:17:10.053286 kernel: ACPI: Added _OSI(Module Device)
Oct 2 19:17:10.053299 kernel: ACPI: Added _OSI(Processor Device)
Oct 2 19:17:10.053312 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 2 19:17:10.053327 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 2 19:17:10.053340 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Oct 2 19:17:10.053352 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Oct 2 19:17:10.053364 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Oct 2 19:17:10.053377 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 2 19:17:10.053389 kernel: ACPI: Interpreter enabled
Oct 2 19:17:10.053401 kernel: ACPI: PM: (supports S0 S5)
Oct 2 19:17:10.053413 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 2 19:17:10.053425 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 2 19:17:10.053440 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Oct 2 19:17:10.053452 kernel: iommu: Default domain type: Translated
Oct 2 19:17:10.053464 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 2 19:17:10.053477 kernel: vgaarb: loaded
Oct 2 19:17:10.053489 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 2 19:17:10.053501 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 2 19:17:10.053513 kernel: PTP clock support registered
Oct 2 19:17:10.053525 kernel: Registered efivars operations
Oct 2 19:17:10.053537 kernel: PCI: Using ACPI for IRQ routing
Oct 2 19:17:10.053550 kernel: PCI: System does not support PCI
Oct 2 19:17:10.053564 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Oct 2 19:17:10.053576 kernel: VFS: Disk quotas dquot_6.6.0
Oct 2 19:17:10.053589 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 2 19:17:10.053601 kernel: pnp: PnP ACPI init
Oct 2 19:17:10.053613 kernel: pnp: PnP ACPI: found 3 devices
Oct 2 19:17:10.053625 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 2 19:17:10.053638 kernel: NET: Registered PF_INET protocol family
Oct 2 19:17:10.053650 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 2 19:17:10.053665 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 2 19:17:10.053678 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 2 19:17:10.053690 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 2 19:17:10.053702 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 2 19:17:10.053715 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 2 19:17:10.053727 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 2 19:17:10.053739 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 2 19:17:10.053752 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 2 19:17:10.053764 kernel: NET: Registered PF_XDP protocol family
Oct 2 19:17:10.053778 kernel: PCI: CLS 0 bytes, default 64
Oct 2 19:17:10.053791 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 2 19:17:10.053803 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Oct 2 19:17:10.053815 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Oct 2 19:17:10.053828 kernel: Initialise system trusted keyrings
Oct 2 19:17:10.053840 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Oct 2 19:17:10.053852 kernel: Key type asymmetric registered
Oct 2 19:17:10.053864 kernel: Asymmetric key parser 'x509' registered
Oct 2 19:17:10.053876 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 2 19:17:10.053891 kernel: io scheduler mq-deadline registered
Oct 2 19:17:10.053915 kernel: io scheduler kyber registered
Oct 2 19:17:10.053928 kernel: io scheduler bfq registered
Oct 2 19:17:10.053940 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 2 19:17:10.053953 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 2 19:17:10.053965 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 2 19:17:10.053978 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Oct 2 19:17:10.053990 kernel: i8042: PNP: No PS/2 controller found.
Oct 2 19:17:10.054179 kernel: rtc_cmos 00:02: registered as rtc0
Oct 2 19:17:10.054311 kernel: rtc_cmos 00:02: setting system clock to 2023-10-02T19:17:09 UTC (1696274229)
Oct 2 19:17:10.054419 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Oct 2 19:17:10.054436 kernel: fail to initialize ptp_kvm
Oct 2 19:17:10.054449 kernel: intel_pstate: CPU model not supported
Oct 2 19:17:10.054462 kernel: efifb: probing for efifb
Oct 2 19:17:10.054475 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Oct 2 19:17:10.054489 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Oct 2 19:17:10.054501 kernel: efifb: scrolling: redraw
Oct 2 19:17:10.054518 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Oct 2 19:17:10.054529 kernel: Console: switching to colour frame buffer device 128x48
Oct 2 19:17:10.054540 kernel: fb0: EFI VGA frame buffer device
Oct 2 19:17:10.054551 kernel: pstore: Registered efi as persistent store backend
Oct 2 19:17:10.054564 kernel: NET: Registered PF_INET6 protocol family
Oct 2 19:17:10.054576 kernel: Segment Routing with IPv6
Oct 2 19:17:10.054589 kernel: In-situ OAM (IOAM) with IPv6
Oct 2 19:17:10.054601 kernel: NET: Registered PF_PACKET protocol family
Oct 2 19:17:10.054613 kernel: Key type dns_resolver registered
Oct 2 19:17:10.054630 kernel: IPI shorthand broadcast: enabled
Oct 2 19:17:10.054643 kernel: sched_clock: Marking stable (755328000, 25355600)->(980433600, -199750000)
Oct 2 19:17:10.054654 kernel: registered taskstats version 1
Oct 2 19:17:10.054666 kernel: Loading compiled-in X.509 certificates
Oct 2 19:17:10.054677 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861'
Oct 2 19:17:10.054692 kernel: Key type .fscrypt registered
Oct 2 19:17:10.054710 kernel: Key type fscrypt-provisioning registered
Oct 2 19:17:10.054722 kernel: pstore: Using crash dump compression: deflate
Oct 2 19:17:10.054737 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 2 19:17:10.054748 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:17:10.054759 kernel: ima: No architecture policies found Oct 2 19:17:10.054771 kernel: Freeing unused kernel image (initmem) memory: 45372K Oct 2 19:17:10.054782 kernel: Write protecting the kernel read-only data: 28672k Oct 2 19:17:10.054794 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 2 19:17:10.054806 kernel: Freeing unused kernel image (rodata/data gap) memory: 644K Oct 2 19:17:10.054818 kernel: Run /init as init process Oct 2 19:17:10.054832 kernel: with arguments: Oct 2 19:17:10.054846 kernel: /init Oct 2 19:17:10.054862 kernel: with environment: Oct 2 19:17:10.054873 kernel: HOME=/ Oct 2 19:17:10.054884 kernel: TERM=linux Oct 2 19:17:10.054895 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:17:10.054930 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:17:10.054946 systemd[1]: Detected virtualization microsoft. Oct 2 19:17:10.054960 systemd[1]: Detected architecture x86-64. Oct 2 19:17:10.054976 systemd[1]: Running in initrd. Oct 2 19:17:10.054987 systemd[1]: No hostname configured, using default hostname. Oct 2 19:17:10.054999 systemd[1]: Hostname set to <localhost>. Oct 2 19:17:10.055013 systemd[1]: Initializing machine ID from random generator. Oct 2 19:17:10.055027 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:17:10.055040 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:17:10.055052 systemd[1]: Reached target cryptsetup.target. Oct 2 19:17:10.055065 systemd[1]: Reached target paths.target. Oct 2 19:17:10.055078 systemd[1]: Reached target slices.target. Oct 2 19:17:10.055094 systemd[1]: Reached target swap.target. Oct 2 19:17:10.055106 systemd[1]: Reached target timers.target. Oct 2 19:17:10.055120 systemd[1]: Listening on iscsid.socket. Oct 2 19:17:10.055132 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:17:10.055145 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:17:10.055160 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:17:10.055175 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:17:10.055193 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:17:10.055209 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:17:10.055223 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:17:10.055238 systemd[1]: Reached target sockets.target. Oct 2 19:17:10.055252 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:17:10.055267 systemd[1]: Finished network-cleanup.service. Oct 2 19:17:10.055281 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:17:10.055295 systemd[1]: Starting systemd-journald.service... Oct 2 19:17:10.055309 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:17:10.055327 systemd[1]: Starting systemd-resolved.service... Oct 2 19:17:10.055342 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:17:10.055362 systemd-journald[183]: Journal started Oct 2 19:17:10.055436 systemd-journald[183]: Runtime Journal (/run/log/journal/3ce7b81ff85746cea4b02e00ee50a614) is 8.0M, max 159.0M, 151.0M free. 
Oct 2 19:17:10.052538 systemd-modules-load[184]: Inserted module 'overlay' Oct 2 19:17:10.068922 systemd[1]: Started systemd-journald.service. Oct 2 19:17:10.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:10.075674 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:17:10.095408 kernel: audit: type=1130 audit(1696274230.074:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:10.091461 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:17:10.095592 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:17:10.101558 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:17:10.112626 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:17:10.113390 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:17:10.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:10.132738 kernel: audit: type=1130 audit(1696274230.090:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:10.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:10.138116 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:17:10.151456 kernel: audit: type=1130 audit(1696274230.094:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:10.151495 kernel: Bridge firewalling registered Oct 2 19:17:10.153606 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:17:10.171337 kernel: audit: type=1130 audit(1696274230.098:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:10.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:10.170633 systemd-resolved[185]: Positive Trust Anchors: Oct 2 19:17:10.170642 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:17:10.170675 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:17:10.173427 systemd-resolved[185]: Defaulting to hostname 'linux'. 
Oct 2 19:17:10.177051 systemd-modules-load[184]: Inserted module 'br_netfilter' Oct 2 19:17:10.177363 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:17:10.193550 systemd[1]: Started systemd-resolved.service. Oct 2 19:17:10.210190 dracut-cmdline[201]: dracut-dracut-053 Oct 2 19:17:10.210190 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:17:10.196017 systemd[1]: Reached target nss-lookup.target. Oct 2 19:17:10.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:10.240962 kernel: audit: type=1130 audit(1696274230.145:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:10.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:10.262855 kernel: audit: type=1130 audit(1696274230.167:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:10.262924 kernel: audit: type=1130 audit(1696274230.194:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:10.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:10.266923 kernel: SCSI subsystem initialized Oct 2 19:17:10.293581 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:17:10.293649 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:17:10.299929 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:17:10.303337 systemd-modules-load[184]: Inserted module 'dm_multipath' Oct 2 19:17:10.306700 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:17:10.328340 kernel: Loading iSCSI transport class v2.0-870. Oct 2 19:17:10.328371 kernel: audit: type=1130 audit(1696274230.313:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:10.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:10.326745 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:17:10.341294 systemd[1]: Finished systemd-sysctl.service. 
Oct 2 19:17:10.356936 kernel: iscsi: registered transport (tcp) Oct 2 19:17:10.356962 kernel: audit: type=1130 audit(1696274230.346:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:10.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:10.382596 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:17:10.382675 kernel: QLogic iSCSI HBA Driver Oct 2 19:17:10.412815 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:17:10.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:10.417782 systemd[1]: Starting dracut-pre-udev.service... Oct 2 19:17:10.468931 kernel: raid6: avx512x4 gen() 18123 MB/s Oct 2 19:17:10.488923 kernel: raid6: avx512x4 xor() 8006 MB/s Oct 2 19:17:10.508916 kernel: raid6: avx512x2 gen() 18344 MB/s Oct 2 19:17:10.528924 kernel: raid6: avx512x2 xor() 28555 MB/s Oct 2 19:17:10.548916 kernel: raid6: avx512x1 gen() 18215 MB/s Oct 2 19:17:10.569916 kernel: raid6: avx512x1 xor() 26436 MB/s Oct 2 19:17:10.589918 kernel: raid6: avx2x4 gen() 18140 MB/s Oct 2 19:17:10.609919 kernel: raid6: avx2x4 xor() 7505 MB/s Oct 2 19:17:10.629916 kernel: raid6: avx2x2 gen() 18183 MB/s Oct 2 19:17:10.650920 kernel: raid6: avx2x2 xor() 21385 MB/s Oct 2 19:17:10.670914 kernel: raid6: avx2x1 gen() 13497 MB/s Oct 2 19:17:10.690915 kernel: raid6: avx2x1 xor() 19018 MB/s Oct 2 19:17:10.711920 kernel: raid6: sse2x4 gen() 11562 MB/s Oct 2 19:17:10.731915 kernel: raid6: sse2x4 xor() 7241 MB/s Oct 2 19:17:10.751921 kernel: raid6: sse2x2 gen() 12331 MB/s Oct 2 19:17:10.772923 kernel: raid6: sse2x2 xor() 7152 MB/s Oct 2 19:17:10.791918 kernel: raid6: sse2x1 gen() 11169 MB/s Oct 2 19:17:10.815486 kernel: raid6: sse2x1 xor() 5825 MB/s Oct 2 19:17:10.815508 kernel: raid6: using algorithm avx512x2 gen() 18344 MB/s Oct 2 19:17:10.815522 kernel: raid6: .... xor() 28555 MB/s, rmw enabled Oct 2 19:17:10.819627 kernel: raid6: using avx512x2 recovery algorithm Oct 2 19:17:10.838935 kernel: xor: automatically using best checksumming function avx Oct 2 19:17:10.933930 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 2 19:17:10.941714 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:17:10.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:10.945000 audit: BPF prog-id=7 op=LOAD Oct 2 19:17:10.945000 audit: BPF prog-id=8 op=LOAD Oct 2 19:17:10.947139 systemd[1]: Starting systemd-udevd.service... Oct 2 19:17:10.962783 systemd-udevd[385]: Using default interface naming scheme 'v252'. Oct 2 19:17:10.969619 systemd[1]: Started systemd-udevd.service. Oct 2 19:17:10.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:10.975606 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:17:10.996307 dracut-pre-trigger[389]: rd.md=0: removing MD RAID activation Oct 2 19:17:11.025483 systemd[1]: Finished dracut-pre-trigger.service. 
Oct 2 19:17:11.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:11.028659 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:17:11.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:11.066972 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:17:11.117930 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:17:11.143926 kernel: hv_vmbus: Vmbus version:5.2 Oct 2 19:17:11.149934 kernel: AVX2 version of gcm_enc/dec engaged. Oct 2 19:17:11.165925 kernel: hv_vmbus: registering driver hyperv_keyboard Oct 2 19:17:11.170925 kernel: hv_vmbus: registering driver hv_storvsc Oct 2 19:17:11.175867 kernel: scsi host1: storvsc_host_t Oct 2 19:17:11.176077 kernel: scsi host0: storvsc_host_t Oct 2 19:17:11.190484 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Oct 2 19:17:11.190546 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Oct 2 19:17:11.195936 kernel: AES CTR mode by8 optimization enabled Oct 2 19:17:11.203886 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Oct 2 19:17:11.209253 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 2 19:17:11.219943 kernel: hv_vmbus: registering driver hid_hyperv Oct 2 19:17:11.219998 kernel: hv_vmbus: registering driver hv_netvsc Oct 2 19:17:11.233819 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Oct 2 19:17:11.233881 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Oct 2 19:17:11.257440 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Oct 2 19:17:11.257712 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 2 19:17:11.262214 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Oct 2 19:17:11.262414 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Oct 2 19:17:11.268925 kernel: sd 0:0:0:0: [sda] Write Protect is off Oct 2 19:17:11.269145 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Oct 2 19:17:11.269270 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Oct 2 19:17:11.274926 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Oct 2 19:17:11.279926 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 19:17:11.284929 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Oct 2 19:17:11.441271 kernel: hv_netvsc 000d3ad7-d158-000d-3ad7-d158000d3ad7 eth0: VF slot 1 added Oct 2 19:17:11.450931 kernel: hv_vmbus: registering driver hv_pci Oct 2 19:17:11.457928 kernel: hv_pci 87b8be56-eab2-4069-bbe1-df3032cf5c98: PCI VMBus probing: Using version 0x10004 Oct 2 19:17:11.468913 kernel: hv_pci 87b8be56-eab2-4069-bbe1-df3032cf5c98: PCI host bridge to bus eab2:00 Oct 2 19:17:11.469116 kernel: pci_bus eab2:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Oct 2 19:17:11.469242 kernel: pci_bus eab2:00: No busn resource found for root bus, will use [bus 00-ff] Oct 2 19:17:11.479092 kernel: pci eab2:00:02.0: [15b3:1016] type 00 class 0x020000 Oct 2 19:17:11.488456 kernel: pci eab2:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Oct 2 19:17:11.505397 kernel: pci eab2:00:02.0: enabling Extended Tags Oct 2 
19:17:11.523120 kernel: pci eab2:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at eab2:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Oct 2 19:17:11.531694 kernel: pci_bus eab2:00: busn_res: [bus 00-ff] end is updated to 00 Oct 2 19:17:11.531918 kernel: pci eab2:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Oct 2 19:17:11.636933 kernel: mlx5_core eab2:00:02.0: firmware version: 14.30.1224 Oct 2 19:17:11.758087 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:17:11.785633 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (437) Oct 2 19:17:11.801521 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:17:11.806939 kernel: mlx5_core eab2:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Oct 2 19:17:11.949167 kernel: mlx5_core eab2:00:02.0: Supported tc offload range - chains: 1, prios: 1 Oct 2 19:17:11.949453 kernel: mlx5_core eab2:00:02.0: mlx5e_tc_post_act_init:40:(pid 16): firmware level support is missing Oct 2 19:17:11.956288 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:17:11.972501 kernel: hv_netvsc 000d3ad7-d158-000d-3ad7-d158000d3ad7 eth0: VF registering: eth1 Oct 2 19:17:11.972681 kernel: mlx5_core eab2:00:02.0 eth1: joined to eth0 Oct 2 19:17:11.983954 kernel: mlx5_core eab2:00:02.0 enP60082s1: renamed from eth1 Oct 2 19:17:12.658945 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:17:12.662162 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:17:12.669500 systemd[1]: Starting disk-uuid.service... Oct 2 19:17:12.682930 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 19:17:12.690926 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 19:17:13.697930 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 19:17:13.698168 disk-uuid[549]: The operation has completed successfully. Oct 2 19:17:13.770710 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:17:13.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:13.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:13.770815 systemd[1]: Finished disk-uuid.service. Oct 2 19:17:13.782074 systemd[1]: Starting verity-setup.service... Oct 2 19:17:13.819930 kernel: device-mapper: verity: sha256 using implementation "sha256-generic" Oct 2 19:17:14.066259 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:17:14.070469 systemd[1]: Finished verity-setup.service. Oct 2 19:17:14.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:14.074975 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:17:14.151932 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:17:14.152355 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:17:14.154439 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 19:17:14.155236 systemd[1]: Starting ignition-setup.service... 
Oct 2 19:17:14.166127 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 19:17:14.188022 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:17:14.188097 kernel: BTRFS info (device sda6): using free space tree Oct 2 19:17:14.188116 kernel: BTRFS info (device sda6): has skinny extents Oct 2 19:17:14.236532 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:17:14.246297 kernel: kauditd_printk_skb: 10 callbacks suppressed Oct 2 19:17:14.246321 kernel: audit: type=1130 audit(1696274234.239:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:14.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:14.258000 audit: BPF prog-id=9 op=LOAD Oct 2 19:17:14.258869 systemd[1]: Starting systemd-networkd.service... Oct 2 19:17:14.264873 kernel: audit: type=1334 audit(1696274234.258:22): prog-id=9 op=LOAD Oct 2 19:17:14.272738 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:17:14.291163 systemd-networkd[790]: lo: Link UP Oct 2 19:17:14.291173 systemd-networkd[790]: lo: Gained carrier Oct 2 19:17:14.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:14.292097 systemd-networkd[790]: Enumeration completed Oct 2 19:17:14.311688 kernel: audit: type=1130 audit(1696274234.295:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:14.292189 systemd[1]: Started systemd-networkd.service. Oct 2 19:17:14.295366 systemd[1]: Reached target network.target. Oct 2 19:17:14.306944 systemd-networkd[790]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:17:14.318942 systemd[1]: Starting iscsiuio.service... Oct 2 19:17:14.324929 systemd[1]: Started iscsiuio.service. Oct 2 19:17:14.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:14.328414 systemd[1]: Starting iscsid.service... Oct 2 19:17:14.343733 kernel: audit: type=1130 audit(1696274234.326:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:14.344994 iscsid[799]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:17:14.344994 iscsid[799]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Oct 2 19:17:14.344994 iscsid[799]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:17:14.344994 iscsid[799]: If using hardware iscsi like qla4xxx this message can be ignored. 
Oct 2 19:17:14.368724 iscsid[799]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Oct 2 19:17:14.368724 iscsid[799]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Oct 2 19:17:14.376020 systemd[1]: Started iscsid.service.
Oct 2 19:17:14.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:17:14.389123 systemd[1]: Starting dracut-initqueue.service...
Oct 2 19:17:14.394736 kernel: audit: type=1130 audit(1696274234.377:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:17:14.401938 kernel: mlx5_core eab2:00:02.0 enP60082s1: Link up
Oct 2 19:17:14.404734 systemd[1]: Finished dracut-initqueue.service.
Oct 2 19:17:14.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:17:14.408836 systemd[1]: Reached target remote-fs-pre.target.
Oct 2 19:17:14.424360 kernel: audit: type=1130 audit(1696274234.408:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:17:14.424384 systemd[1]: Reached target remote-cryptsetup.target.
Oct 2 19:17:14.428709 systemd[1]: Reached target remote-fs.target.
Oct 2 19:17:14.433834 systemd[1]: Starting dracut-pre-mount.service...
Oct 2 19:17:14.438023 systemd[1]: Finished ignition-setup.service.
Oct 2 19:17:14.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:17:14.443537 systemd[1]: Starting ignition-fetch-offline.service...
Oct 2 19:17:14.459703 kernel: audit: type=1130 audit(1696274234.442:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:17:14.459983 systemd[1]: Finished dracut-pre-mount.service.
Oct 2 19:17:14.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:17:14.480849 kernel: audit: type=1130 audit(1696274234.459:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:17:14.480886 kernel: hv_netvsc 000d3ad7-d158-000d-3ad7-d158000d3ad7 eth0: Data path switched to VF: enP60082s1
Oct 2 19:17:14.481827 systemd-networkd[790]: enP60082s1: Link UP
Oct 2 19:17:14.487955 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Oct 2 19:17:14.481983 systemd-networkd[790]: eth0: Link UP
Oct 2 19:17:14.489831 systemd-networkd[790]: eth0: Gained carrier
Oct 2 19:17:14.495439 systemd-networkd[790]: enP60082s1: Gained carrier
Oct 2 19:17:14.544001 systemd-networkd[790]: eth0: DHCPv4 address 10.200.8.48/24, gateway 10.200.8.1 acquired from 168.63.129.16
Oct 2 19:17:16.455162 systemd-networkd[790]: eth0: Gained IPv6LL
Oct 2 19:17:17.627195 ignition[811]: Ignition 2.14.0
Oct 2 19:17:17.627212 ignition[811]: Stage: fetch-offline
Oct 2 19:17:17.627303 ignition[811]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct 2 19:17:17.627353 ignition[811]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Oct 2 19:17:17.733628 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 2 19:17:17.733853 ignition[811]: parsed url from cmdline: ""
Oct 2 19:17:17.733859 ignition[811]: no config URL provided
Oct 2 19:17:17.733868 ignition[811]: reading system config file "/usr/lib/ignition/user.ign"
Oct 2 19:17:17.733878 ignition[811]: no config at "/usr/lib/ignition/user.ign"
Oct 2 19:17:17.733885 ignition[811]: failed to fetch config: resource requires networking
Oct 2 19:17:17.736733 ignition[811]: Ignition finished successfully
Oct 2 19:17:17.750851 systemd[1]: Finished ignition-fetch-offline.service.
Oct 2 19:17:17.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:17:17.756257 systemd[1]: Starting ignition-fetch.service...
Oct 2 19:17:17.776706 kernel: audit: type=1130 audit(1696274237.755:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:17:17.764460 ignition[821]: Ignition 2.14.0
Oct 2 19:17:17.764466 ignition[821]: Stage: fetch
Oct 2 19:17:17.764584 ignition[821]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct 2 19:17:17.764611 ignition[821]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Oct 2 19:17:17.767700 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 2 19:17:17.768482 ignition[821]: parsed url from cmdline: ""
Oct 2 19:17:17.768487 ignition[821]: no config URL provided
Oct 2 19:17:17.768493 ignition[821]: reading system config file "/usr/lib/ignition/user.ign"
Oct 2 19:17:17.768505 ignition[821]: no config at "/usr/lib/ignition/user.ign"
Oct 2 19:17:17.768622 ignition[821]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Oct 2 19:17:17.795112 ignition[821]: GET result: OK
Oct 2 19:17:17.796566 ignition[821]: config has been read from IMDS userdata
Oct 2 19:17:17.796614 ignition[821]: parsing config with SHA512: e37b5d1965a60b78fd542df3164370e66c3f7a133637f8f68fc82434867f6758ffc2e50fc61419f60c47bfb157187de01bd2b44eadc84d47e4d30a6231fbe2cb
Oct 2 19:17:17.811174 unknown[821]: fetched base config from "system"
Oct 2 19:17:17.811189 unknown[821]: fetched base config from "system"
Oct 2 19:17:17.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:17:17.811694 ignition[821]: fetch: fetch complete
Oct 2 19:17:17.834488 kernel: audit: type=1130 audit(1696274237.814:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:17:17.811196 unknown[821]: fetched user config from "azure"
Oct 2 19:17:17.811699 ignition[821]: fetch: fetch passed
Oct 2 19:17:17.813152 systemd[1]: Finished ignition-fetch.service.
Oct 2 19:17:17.811738 ignition[821]: Ignition finished successfully
Oct 2 19:17:17.816132 systemd[1]: Starting ignition-kargs.service...
Oct 2 19:17:17.850051 ignition[827]: Ignition 2.14.0
Oct 2 19:17:17.850061 ignition[827]: Stage: kargs
Oct 2 19:17:17.850210 ignition[827]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct 2 19:17:17.850245 ignition[827]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Oct 2 19:17:17.860403 ignition[827]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 2 19:17:17.861439 ignition[827]: kargs: kargs passed
Oct 2 19:17:17.864764 systemd[1]: Finished ignition-kargs.service.
Oct 2 19:17:17.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:17:17.861478 ignition[827]: Ignition finished successfully
Oct 2 19:17:17.869419 systemd[1]: Starting ignition-disks.service...
Oct 2 19:17:17.882324 ignition[833]: Ignition 2.14.0 Oct 2 19:17:17.882334 ignition[833]: Stage: disks Oct 2 19:17:17.882477 ignition[833]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:17:17.882502 ignition[833]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Oct 2 19:17:17.885078 ignition[833]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 2 19:17:17.891105 ignition[833]: disks: disks passed Oct 2 19:17:17.891164 ignition[833]: Ignition finished successfully Oct 2 19:17:17.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:17.892155 systemd[1]: Finished ignition-disks.service. Oct 2 19:17:17.894795 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:17:17.898572 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:17:17.900493 systemd[1]: Reached target local-fs.target. Oct 2 19:17:17.902525 systemd[1]: Reached target sysinit.target. Oct 2 19:17:17.906198 systemd[1]: Reached target basic.target. Oct 2 19:17:17.908959 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:17:17.965796 systemd-fsck[841]: ROOT: clean, 603/7326000 files, 481068/7359488 blocks Oct 2 19:17:17.979106 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:17:17.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:17.987784 systemd[1]: Mounting sysroot.mount... Oct 2 19:17:18.002927 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:17:18.003721 systemd[1]: Mounted sysroot.mount. Oct 2 19:17:18.005397 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:17:18.036851 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:17:18.042587 systemd[1]: Starting flatcar-metadata-hostname.service... Oct 2 19:17:18.048528 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:17:18.048572 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:17:18.058229 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:17:18.092677 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:17:18.100717 systemd[1]: Starting initrd-setup-root.service... Oct 2 19:17:18.111925 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (852) Oct 2 19:17:18.111980 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:17:18.120296 kernel: BTRFS info (device sda6): using free space tree Oct 2 19:17:18.120324 kernel: BTRFS info (device sda6): has skinny extents Oct 2 19:17:18.123199 initrd-setup-root[857]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:17:18.130328 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:17:18.168388 initrd-setup-root[883]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:17:18.187143 initrd-setup-root[891]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:17:18.191796 initrd-setup-root[899]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:17:18.623320 systemd[1]: Finished initrd-setup-root.service. 
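The systemd-fsck summary above reports inode and block usage as used/total pairs. A small illustrative sketch that converts that summary line into utilization percentages:

    import re

    # Summary line as emitted by systemd-fsck above.
    summary = "ROOT: clean, 603/7326000 files, 481068/7359488 blocks"
    (files_used, files_total), (blocks_used, blocks_total) = [
        tuple(map(int, pair)) for pair in re.findall(r"(\d+)/(\d+)", summary)
    ]
    print(f"inodes used: {100 * files_used / files_total:.3f}%")    # ~0.008%
    print(f"blocks used: {100 * blocks_used / blocks_total:.2f}%")  # ~6.54%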
Oct 2 19:17:18.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:18.628590 systemd[1]: Starting ignition-mount.service... Oct 2 19:17:18.634772 systemd[1]: Starting sysroot-boot.service... Oct 2 19:17:18.641323 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Oct 2 19:17:18.641463 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Oct 2 19:17:18.664704 systemd[1]: Finished sysroot-boot.service. Oct 2 19:17:18.669332 ignition[918]: INFO : Ignition 2.14.0 Oct 2 19:17:18.669332 ignition[918]: INFO : Stage: mount Oct 2 19:17:18.669332 ignition[918]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:17:18.669332 ignition[918]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Oct 2 19:17:18.669332 ignition[918]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 2 19:17:18.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:18.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:18.671989 systemd[1]: Finished ignition-mount.service. Oct 2 19:17:18.689718 ignition[918]: INFO : mount: mount passed Oct 2 19:17:18.689718 ignition[918]: INFO : Ignition finished successfully Oct 2 19:17:19.417820 coreos-metadata[851]: Oct 02 19:17:19.417 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Oct 2 19:17:19.434085 coreos-metadata[851]: Oct 02 19:17:19.434 INFO Fetch successful Oct 2 19:17:19.471249 coreos-metadata[851]: Oct 02 19:17:19.471 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Oct 2 19:17:19.489147 coreos-metadata[851]: Oct 02 19:17:19.489 INFO Fetch successful Oct 2 19:17:19.508308 coreos-metadata[851]: Oct 02 19:17:19.508 INFO wrote hostname ci-3510.3.0-a-eb10099fa4 to /sysroot/etc/hostname Oct 2 19:17:19.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:19.510513 systemd[1]: Finished flatcar-metadata-hostname.service. Oct 2 19:17:19.536266 kernel: kauditd_printk_skb: 6 callbacks suppressed Oct 2 19:17:19.536294 kernel: audit: type=1130 audit(1696274239.514:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:19.516492 systemd[1]: Starting ignition-files.service... Oct 2 19:17:19.542857 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
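coreos-metadata above fetches the instance name from IMDS and writes it to /sysroot/etc/hostname. A rough Python equivalent of that flow; the URL is taken from the log, while the header, timeout, and error handling are assumptions rather than the agent's real code:

    import urllib.request

    NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
                "?api-version=2017-08-01&format=text")

    def fetch_instance_name() -> str:
        req = urllib.request.Request(NAME_URL, headers={"Metadata": "true"})
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.read().decode().strip()

    def write_hostname(name: str, root: str = "/sysroot") -> None:
        # The log shows the hostname landing in the target root, not the initrd's /etc.
        with open(f"{root}/etc/hostname", "w") as f:
            f.write(name + "\n")

    if __name__ == "__main__":
        write_hostname(fetch_instance_name())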
Oct 2 19:17:19.555925 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (930) Oct 2 19:17:19.564477 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:17:19.564525 kernel: BTRFS info (device sda6): using free space tree Oct 2 19:17:19.564537 kernel: BTRFS info (device sda6): has skinny extents Oct 2 19:17:19.573572 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:17:19.586531 ignition[949]: INFO : Ignition 2.14.0 Oct 2 19:17:19.586531 ignition[949]: INFO : Stage: files Oct 2 19:17:19.590183 ignition[949]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:17:19.590183 ignition[949]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Oct 2 19:17:19.603189 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 2 19:17:19.620660 ignition[949]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:17:19.623710 ignition[949]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:17:19.623710 ignition[949]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:17:19.652678 ignition[949]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:17:19.656491 ignition[949]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:17:19.666955 unknown[949]: wrote ssh authorized keys file for user: core Oct 2 19:17:19.669506 ignition[949]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:17:19.673197 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Oct 2 19:17:19.677859 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Oct 2 19:17:20.104745 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 2 19:17:20.282969 ignition[949]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Oct 2 19:17:20.290714 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Oct 2 19:17:20.290714 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.24.2-linux-amd64.tar.gz" Oct 2 19:17:20.290714 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz: attempt #1 Oct 2 19:17:20.412695 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 2 19:17:20.465396 ignition[949]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 961188117863ca9af5b084e84691e372efee93ad09daf6a0422e8d75a5803f394d8968064f7ca89f14e8973766201e731241f32538cf2c8d91f0233e786302df Oct 2 19:17:20.473168 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.24.2-linux-amd64.tar.gz" 
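Each createFiles operation above downloads an artifact and accepts it only when it matches the expected SHA-512 before writing it under /sysroot. A sketch of that verify-then-write pattern, using the URL and digest from the cni-plugins entry; the chunking and destination path are illustrative:

    import hashlib
    import urllib.request

    URL = ("https://github.com/containernetworking/plugins/releases/download/"
           "v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz")
    EXPECTED_SHA512 = ("4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30"
                       "c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d")

    def download_verified(url: str, expected: str, dest: str) -> None:
        digest = hashlib.sha512()
        data = bytearray()
        with urllib.request.urlopen(url, timeout=60) as resp:
            for chunk in iter(lambda: resp.read(1 << 20), b""):
                digest.update(chunk)
                data.extend(chunk)
        if digest.hexdigest() != expected:
            raise ValueError("file does not match expected sum")
        with open(dest, "wb") as f:
            f.write(data)

    if __name__ == "__main__":
        download_verified(URL, EXPECTED_SHA512,
                          "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz")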
Oct 2 19:17:20.473168 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:17:20.473168 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/amd64/kubeadm: attempt #1 Oct 2 19:17:20.661700 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Oct 2 19:17:22.509541 ignition[949]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 43b8f213f1732c092e34008d5334e6622a6603f7ec5890c395ac911d50069d0dc11a81fa38436df40fc875a10fee6ee13aa285c017f1de210171065e847c99c5 Oct 2 19:17:22.517919 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:17:22.517919 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:17:22.517919 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/amd64/kubelet: attempt #1 Oct 2 19:17:22.643413 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Oct 2 19:17:26.665544 ignition[949]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 82b36a0b83a1d48ef1f70e3ed2a263b3ce935304cdc0606d194b290217fb04f98628b0d82e200b51ccf5c05c718b2476274ae710bb143fffe28dc6bbf8407d54 Oct 2 19:17:26.673664 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:17:26.673664 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:17:26.673664 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:17:26.673664 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:17:26.673664 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:17:26.700715 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Oct 2 19:17:26.700715 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:17:26.715877 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3027469228" Oct 2 19:17:26.715877 ignition[949]: CRITICAL : files: createFilesystemsFiles: createFiles: op(9): op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3027469228": device or resource busy Oct 2 19:17:26.715877 ignition[949]: ERROR : files: createFilesystemsFiles: createFiles: op(9): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3027469228", trying btrfs: device or resource busy Oct 2 19:17:26.715877 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3027469228" Oct 2 19:17:26.743859 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (954) Oct 2 19:17:26.742306 
systemd[1]: mnt-oem3027469228.mount: Deactivated successfully. Oct 2 19:17:26.746455 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3027469228" Oct 2 19:17:26.746455 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [started] unmounting "/mnt/oem3027469228" Oct 2 19:17:26.746455 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [finished] unmounting "/mnt/oem3027469228" Oct 2 19:17:26.746455 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Oct 2 19:17:26.746455 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 19:17:26.746455 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(d): oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:17:26.791804 kernel: audit: type=1130 audit(1696274246.756:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:26.791835 kernel: audit: type=1130 audit(1696274246.790:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:26.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:26.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:17:26.791959 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(d): op(e): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem662172116" Oct 2 19:17:26.791959 ignition[949]: CRITICAL : files: createFilesystemsFiles: createFiles: op(d): op(e): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem662172116": device or resource busy Oct 2 19:17:26.791959 ignition[949]: ERROR : files: createFilesystemsFiles: createFiles: op(d): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem662172116", trying btrfs: device or resource busy Oct 2 19:17:26.791959 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(d): op(f): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem662172116" Oct 2 19:17:26.791959 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(d): op(f): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem662172116" Oct 2 19:17:26.791959 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(d): op(10): [started] unmounting "/mnt/oem662172116" Oct 2 19:17:26.791959 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(d): op(10): [finished] unmounting "/mnt/oem662172116" Oct 2 19:17:26.791959 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 19:17:26.791959 ignition[949]: INFO : files: op(11): [started] processing unit "waagent.service" Oct 2 19:17:26.791959 ignition[949]: INFO : files: op(11): [finished] processing unit "waagent.service" Oct 2 19:17:26.791959 ignition[949]: INFO : files: op(12): [started] processing unit "nvidia.service" Oct 2 19:17:26.791959 ignition[949]: INFO : files: op(12): [finished] processing unit "nvidia.service" Oct 2 19:17:26.791959 ignition[949]: INFO : files: op(13): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:17:26.791959 ignition[949]: INFO : files: op(13): op(14): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:17:26.791959 ignition[949]: INFO : files: op(13): op(14): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:17:26.791959 ignition[949]: INFO : files: op(13): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:17:26.791959 ignition[949]: INFO : files: op(15): [started] processing unit "prepare-critools.service" Oct 2 19:17:26.825734 kernel: audit: type=1131 audit(1696274246.790:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:26.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:26.754930 systemd[1]: Finished ignition-files.service. 
Oct 2 19:17:26.826257 ignition[949]: INFO : files: op(15): op(16): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:17:26.826257 ignition[949]: INFO : files: op(15): op(16): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:17:26.826257 ignition[949]: INFO : files: op(15): [finished] processing unit "prepare-critools.service" Oct 2 19:17:26.826257 ignition[949]: INFO : files: op(17): [started] setting preset to enabled for "waagent.service" Oct 2 19:17:26.826257 ignition[949]: INFO : files: op(17): [finished] setting preset to enabled for "waagent.service" Oct 2 19:17:26.826257 ignition[949]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service" Oct 2 19:17:26.826257 ignition[949]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service" Oct 2 19:17:26.826257 ignition[949]: INFO : files: op(19): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:17:26.826257 ignition[949]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:17:26.826257 ignition[949]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:17:26.826257 ignition[949]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:17:26.826257 ignition[949]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:17:26.826257 ignition[949]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:17:26.826257 ignition[949]: INFO : files: files passed Oct 2 19:17:26.826257 ignition[949]: INFO : Ignition finished successfully Oct 2 19:17:26.771988 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:17:26.832162 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:17:26.781479 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:17:26.966259 kernel: audit: type=1130 audit(1696274246.941:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:26.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:26.782336 systemd[1]: Starting ignition-quench.service... Oct 2 19:17:26.786679 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:17:26.786784 systemd[1]: Finished ignition-quench.service. Oct 2 19:17:26.811172 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:17:26.941637 systemd[1]: Reached target ignition-complete.target. Oct 2 19:17:26.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:26.942633 systemd[1]: Starting initrd-parse-etc.service... 
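The "setting preset to enabled" entries above amount to creating wants-symlinks for the written units inside the target root. A hedged sketch of that effect, assuming each unit's [Install] section declares WantedBy=multi-user.target (not something stated in the log):

    import os

    def enable_unit(unit: str, root: str = "/sysroot",
                    wanted_by: str = "multi-user.target") -> None:
        # Enabling ends up as a wants-symlink under the target root; the
        # WantedBy target here is an assumption about the unit files.
        wants_dir = os.path.join(root, "etc/systemd/system", f"{wanted_by}.wants")
        os.makedirs(wants_dir, exist_ok=True)
        link = os.path.join(wants_dir, unit)
        if not os.path.islink(link):
            os.symlink(f"/etc/systemd/system/{unit}", link)

    for svc in ("waagent.service", "nvidia.service",
                "prepare-cni-plugins.service", "prepare-critools.service"):
        enable_unit(svc)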
Oct 2 19:17:27.007343 kernel: audit: type=1130 audit(1696274246.976:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.007384 kernel: audit: type=1131 audit(1696274246.976:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:26.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:26.973412 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:17:26.973509 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:17:27.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:26.977845 systemd[1]: Reached target initrd-fs.target. Oct 2 19:17:27.046739 kernel: audit: type=1130 audit(1696274247.031:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.003564 systemd[1]: Reached target initrd.target. Oct 2 19:17:27.007395 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:17:27.008410 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:17:27.027243 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:17:27.050051 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:17:27.059094 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:17:27.062474 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:17:27.067053 systemd[1]: Stopped target timers.target. Oct 2 19:17:27.071480 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:17:27.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.071634 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:17:27.090891 kernel: audit: type=1131 audit(1696274247.074:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.086963 systemd[1]: Stopped target initrd.target. Oct 2 19:17:27.091035 systemd[1]: Stopped target basic.target. Oct 2 19:17:27.094779 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:17:27.098460 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:17:27.102872 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:17:27.109427 systemd[1]: Stopped target remote-fs.target. Oct 2 19:17:27.113427 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:17:27.117423 systemd[1]: Stopped target sysinit.target. Oct 2 19:17:27.121243 systemd[1]: Stopped target local-fs.target. Oct 2 19:17:27.125289 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:17:27.129249 systemd[1]: Stopped target swap.target. Oct 2 19:17:27.132781 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:17:27.135254 systemd[1]: Stopped dracut-pre-mount.service. 
Oct 2 19:17:27.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.146941 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:17:27.161105 kernel: audit: type=1131 audit(1696274247.146:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.163248 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:17:27.163396 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:17:27.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.169755 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:17:27.181820 kernel: audit: type=1131 audit(1696274247.169:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.169881 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:17:27.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.188743 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:17:27.189754 systemd[1]: Stopped ignition-files.service. Oct 2 19:17:27.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.192995 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Oct 2 19:17:27.193107 systemd[1]: Stopped flatcar-metadata-hostname.service. Oct 2 19:17:27.198010 systemd[1]: Stopping ignition-mount.service... Oct 2 19:17:27.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.217920 ignition[987]: INFO : Ignition 2.14.0 Oct 2 19:17:27.217920 ignition[987]: INFO : Stage: umount Oct 2 19:17:27.217920 ignition[987]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:17:27.217920 ignition[987]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Oct 2 19:17:27.217920 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 2 19:17:27.217920 ignition[987]: INFO : umount: umount passed Oct 2 19:17:27.217920 ignition[987]: INFO : Ignition finished successfully Oct 2 19:17:27.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:17:27.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.210980 systemd[1]: Stopping iscsiuio.service... Oct 2 19:17:27.214840 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:17:27.215067 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:17:27.219047 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:17:27.221584 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:17:27.221762 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:17:27.224152 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:17:27.224292 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:17:27.230169 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:17:27.230282 systemd[1]: Stopped iscsiuio.service. Oct 2 19:17:27.235737 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:17:27.235886 systemd[1]: Stopped ignition-mount.service. Oct 2 19:17:27.244165 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:17:27.244254 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:17:27.254446 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:17:27.254505 systemd[1]: Stopped ignition-disks.service. Oct 2 19:17:27.257507 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:17:27.257557 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:17:27.261472 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 2 19:17:27.261520 systemd[1]: Stopped ignition-fetch.service. Oct 2 19:17:27.263738 systemd[1]: Stopped target network.target. 
Oct 2 19:17:27.265669 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:17:27.265726 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:17:27.267799 systemd[1]: Stopped target paths.target. Oct 2 19:17:27.269881 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:17:27.274572 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:17:27.278855 systemd[1]: Stopped target slices.target. Oct 2 19:17:27.324851 systemd[1]: Stopped target sockets.target. Oct 2 19:17:27.328675 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:17:27.328725 systemd[1]: Closed iscsid.socket. Oct 2 19:17:27.333726 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:17:27.333786 systemd[1]: Closed iscsiuio.socket. Oct 2 19:17:27.338793 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:17:27.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.338864 systemd[1]: Stopped ignition-setup.service. Oct 2 19:17:27.343510 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:17:27.346828 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:17:27.349039 systemd-networkd[790]: eth0: DHCPv6 lease lost Oct 2 19:17:27.351871 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:17:27.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.352535 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:17:27.352631 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:17:27.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.360861 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:17:27.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.360978 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:17:27.371000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:17:27.371000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:17:27.367538 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:17:27.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.367638 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:17:27.371694 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:17:27.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.371740 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:17:27.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.375846 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Oct 2 19:17:27.375910 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:17:27.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.381099 systemd[1]: Stopping network-cleanup.service... Oct 2 19:17:27.383754 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:17:27.383823 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:17:27.388145 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:17:27.388210 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:17:27.392166 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:17:27.392221 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:17:27.400285 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:17:27.417265 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:17:27.420924 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:17:27.423348 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:17:27.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.428122 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:17:27.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.428190 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:17:27.430303 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:17:27.430356 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:17:27.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.432285 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:17:27.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.432329 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:17:27.434298 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:17:27.434350 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:17:27.436241 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:17:27.436292 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:17:27.445599 systemd[1]: Starting initrd-udevadm-cleanup-db.service... 
Oct 2 19:17:27.447712 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:17:27.447784 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:17:27.453443 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:17:27.453531 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:17:27.507935 kernel: hv_netvsc 000d3ad7-d158-000d-3ad7-d158000d3ad7 eth0: Data path switched from VF: enP60082s1 Oct 2 19:17:27.529264 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:17:27.529391 systemd[1]: Stopped network-cleanup.service. Oct 2 19:17:27.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:27.536749 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:17:27.542340 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:17:27.553548 systemd[1]: Switching root. Oct 2 19:17:27.581426 iscsid[799]: iscsid shutting down. Oct 2 19:17:27.583119 systemd-journald[183]: Received SIGTERM from PID 1 (n/a). Oct 2 19:17:27.583182 systemd-journald[183]: Journal stopped Oct 2 19:17:40.062965 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:17:40.062995 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 19:17:40.063006 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:17:40.063016 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:17:40.063025 kernel: SELinux: policy capability open_perms=1 Oct 2 19:17:40.063033 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:17:40.063045 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:17:40.063058 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:17:40.063067 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:17:40.063078 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:17:40.063263 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:17:40.063275 systemd[1]: Successfully loaded SELinux policy in 310.802ms. Oct 2 19:17:40.063287 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.990ms. Oct 2 19:17:40.063299 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:17:40.063313 systemd[1]: Detected virtualization microsoft. Oct 2 19:17:40.063322 systemd[1]: Detected architecture x86-64. Oct 2 19:17:40.063334 systemd[1]: Detected first boot. Oct 2 19:17:40.063344 systemd[1]: Hostname set to <ci-3510.3.0-a-eb10099fa4>. Oct 2 19:17:40.063355 systemd[1]: Initializing machine ID from random generator. Oct 2 19:17:40.063368 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:17:40.063379 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:17:40.063389 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
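After the switch into the real root, the journal above shows the SELinux policy being loaded along with the policy capabilities the kernel reports. A small sketch that reads the same information back from selinuxfs, assuming it is mounted at the usual /sys/fs/selinux location:

    import os

    SELINUXFS = "/sys/fs/selinux"

    def read_flag(path: str) -> str:
        with open(path) as f:
            return f.read().strip()

    if os.path.isdir(SELINUXFS):
        print("enforcing:", read_flag(f"{SELINUXFS}/enforce"))
        caps_dir = f"{SELINUXFS}/policy_capabilities"
        for cap in sorted(os.listdir(caps_dir)):
            # Mirrors lines like "policy capability open_perms=1" in the log.
            print(f"policy capability {cap}={read_flag(os.path.join(caps_dir, cap))}")
    else:
        print("selinuxfs not mounted")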
Oct 2 19:17:40.063403 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:17:40.063413 kernel: kauditd_printk_skb: 40 callbacks suppressed Oct 2 19:17:40.063423 kernel: audit: type=1334 audit(1696274259.565:88): prog-id=12 op=LOAD Oct 2 19:17:40.063432 kernel: audit: type=1334 audit(1696274259.565:89): prog-id=3 op=UNLOAD Oct 2 19:17:40.063445 kernel: audit: type=1334 audit(1696274259.569:90): prog-id=13 op=LOAD Oct 2 19:17:40.063456 kernel: audit: type=1334 audit(1696274259.574:91): prog-id=14 op=LOAD Oct 2 19:17:40.063465 kernel: audit: type=1334 audit(1696274259.574:92): prog-id=4 op=UNLOAD Oct 2 19:17:40.063475 kernel: audit: type=1334 audit(1696274259.574:93): prog-id=5 op=UNLOAD Oct 2 19:17:40.063486 kernel: audit: type=1334 audit(1696274259.578:94): prog-id=15 op=LOAD Oct 2 19:17:40.063497 kernel: audit: type=1334 audit(1696274259.578:95): prog-id=12 op=UNLOAD Oct 2 19:17:40.063505 kernel: audit: type=1334 audit(1696274259.597:96): prog-id=16 op=LOAD Oct 2 19:17:40.063517 kernel: audit: type=1334 audit(1696274259.601:97): prog-id=17 op=LOAD Oct 2 19:17:40.063531 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:17:40.063541 systemd[1]: Stopped iscsid.service. Oct 2 19:17:40.063553 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:17:40.063563 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:17:40.063574 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:17:40.063585 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:17:40.063599 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:17:40.063614 systemd[1]: Created slice system-getty.slice. Oct 2 19:17:40.063623 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:17:40.063636 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:17:40.063647 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:17:40.063658 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:17:40.063667 systemd[1]: Created slice user.slice. Oct 2 19:17:40.063679 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:17:40.063690 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:17:40.063701 systemd[1]: Set up automount boot.automount. Oct 2 19:17:40.063714 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:17:40.063727 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:17:40.063739 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:17:40.063749 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:17:40.063761 systemd[1]: Reached target integritysetup.target. Oct 2 19:17:40.063771 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:17:40.063784 systemd[1]: Reached target remote-fs.target. Oct 2 19:17:40.063794 systemd[1]: Reached target slices.target. Oct 2 19:17:40.063808 systemd[1]: Reached target swap.target. Oct 2 19:17:40.063820 systemd[1]: Reached target torcx.target. Oct 2 19:17:40.063830 systemd[1]: Reached target veritysetup.target. Oct 2 19:17:40.063842 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:17:40.063854 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:17:40.063867 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:17:40.063880 systemd[1]: Listening on systemd-udevd-control.socket. 
Oct 2 19:17:40.063892 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:17:40.063903 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:17:40.063922 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:17:40.063935 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:17:40.063945 systemd[1]: Mounting media.mount... Oct 2 19:17:40.063958 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:17:40.063970 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:17:40.063984 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:17:40.063997 systemd[1]: Mounting tmp.mount... Oct 2 19:17:40.064010 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:17:40.064020 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:17:40.064033 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:17:40.064046 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:17:40.064056 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:17:40.064068 systemd[1]: Starting modprobe@drm.service... Oct 2 19:17:40.064081 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:17:40.064097 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:17:40.064115 systemd[1]: Starting modprobe@loop.service... Oct 2 19:17:40.064137 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:17:40.064157 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:17:40.064175 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:17:40.064194 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:17:40.064215 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:17:40.064235 systemd[1]: Stopped systemd-journald.service. Oct 2 19:17:40.064258 systemd[1]: Starting systemd-journald.service... Oct 2 19:17:40.064278 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:17:40.064295 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:17:40.064314 kernel: loop: module loaded Oct 2 19:17:40.064337 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:17:40.064355 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:17:40.064375 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:17:40.064395 systemd[1]: Stopped verity-setup.service. Oct 2 19:17:40.064417 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:17:40.064441 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:17:40.064461 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:17:40.064484 systemd[1]: Mounted media.mount. Oct 2 19:17:40.064507 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:17:40.064529 systemd-journald[1102]: Journal started Oct 2 19:17:40.064601 systemd-journald[1102]: Runtime Journal (/run/log/journal/1ff939ddfe2544e3b3a5f3405b172544) is 8.0M, max 159.0M, 151.0M free. 
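systemd-journald above names its runtime journal directory after the newly generated machine ID under /run/log/journal and reports its size. A sketch that locates that directory and totals its on-disk size; the walk is illustrative and not journald's own accounting:

    import os

    def runtime_journal_dir() -> str:
        with open("/etc/machine-id") as f:
            machine_id = f.read().strip()
        return f"/run/log/journal/{machine_id}"

    def dir_size_bytes(path: str) -> int:
        total = 0
        for root, _dirs, files in os.walk(path):
            for name in files:
                total += os.path.getsize(os.path.join(root, name))
        return total

    if __name__ == "__main__":
        path = runtime_journal_dir()
        print(path, f"{dir_size_bytes(path) / (1024 * 1024):.1f}M used")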
Oct 2 19:17:29.951000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:17:30.623000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 19:17:30.639000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:17:30.639000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:17:30.639000 audit: BPF prog-id=10 op=LOAD Oct 2 19:17:30.639000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:17:30.639000 audit: BPF prog-id=11 op=LOAD Oct 2 19:17:30.639000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:17:39.565000 audit: BPF prog-id=12 op=LOAD Oct 2 19:17:39.565000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:17:39.569000 audit: BPF prog-id=13 op=LOAD Oct 2 19:17:39.574000 audit: BPF prog-id=14 op=LOAD Oct 2 19:17:39.574000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:17:39.574000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:17:39.578000 audit: BPF prog-id=15 op=LOAD Oct 2 19:17:39.578000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:17:39.597000 audit: BPF prog-id=16 op=LOAD Oct 2 19:17:39.601000 audit: BPF prog-id=17 op=LOAD Oct 2 19:17:39.601000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:17:39.601000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:17:39.606000 audit: BPF prog-id=18 op=LOAD Oct 2 19:17:39.606000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:17:39.614000 audit: BPF prog-id=19 op=LOAD Oct 2 19:17:39.614000 audit: BPF prog-id=20 op=LOAD Oct 2 19:17:39.615000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:17:39.615000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:17:39.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:39.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:39.630000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:17:39.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:39.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:39.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:39.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:17:39.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:39.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:39.951000 audit: BPF prog-id=21 op=LOAD Oct 2 19:17:39.951000 audit: BPF prog-id=22 op=LOAD Oct 2 19:17:39.951000 audit: BPF prog-id=23 op=LOAD Oct 2 19:17:39.951000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:17:39.951000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:17:40.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.059000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:17:40.059000 audit[1102]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fffd112a0d0 a2=4000 a3=7fffd112a16c items=0 ppid=1 pid=1102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:40.059000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:17:31.855865 /usr/lib/systemd/system-generators/torcx-generator[1020]: time="2023-10-02T19:17:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:17:39.564474 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:17:31.856388 /usr/lib/systemd/system-generators/torcx-generator[1020]: time="2023-10-02T19:17:31Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:17:39.616551 systemd[1]: systemd-journald.service: Deactivated successfully. 
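The torcx-generator entries around this point walk an ordered list of store paths, skip the ones that do not exist, and cache any name:reference.torcx.tgz archives they find. A hedged Python sketch of that scan; the store paths are copied from the log, the filename convention is inferred from the cache entries, and the rest is illustrative:

    import glob
    import os

    STORE_PATHS = [
        "/usr/share/torcx/store",
        "/usr/share/oem/torcx/store/3510.3.0",
        "/usr/share/oem/torcx/store",
        "/var/lib/torcx/store/3510.3.0",
        "/var/lib/torcx/store",
    ]

    def scan_stores() -> dict:
        cache = {}
        for store in STORE_PATHS:
            if not os.path.isdir(store):
                # Matches the "store skipped ... no such file or directory" entries.
                continue
            for archive in glob.glob(os.path.join(store, "*.torcx.tgz")):
                # e.g. docker:20.10.torcx.tgz -> name "docker", reference "20.10"
                stem = os.path.basename(archive)[: -len(".torcx.tgz")]
                name, _, reference = stem.partition(":")
                cache.setdefault((name, reference), archive)
        return cache

    if __name__ == "__main__":
        for (name, ref), path in scan_stores().items():
            print(f"new archive/reference added to cache: "
                  f"name={name} reference={ref} path={path}")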
Oct 2 19:17:31.856410 /usr/lib/systemd/system-generators/torcx-generator[1020]: time="2023-10-02T19:17:31Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:17:31.856449 /usr/lib/systemd/system-generators/torcx-generator[1020]: time="2023-10-02T19:17:31Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:17:31.856460 /usr/lib/systemd/system-generators/torcx-generator[1020]: time="2023-10-02T19:17:31Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:17:31.856515 /usr/lib/systemd/system-generators/torcx-generator[1020]: time="2023-10-02T19:17:31Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:17:31.856530 /usr/lib/systemd/system-generators/torcx-generator[1020]: time="2023-10-02T19:17:31Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:17:31.856761 /usr/lib/systemd/system-generators/torcx-generator[1020]: time="2023-10-02T19:17:31Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:17:31.856806 /usr/lib/systemd/system-generators/torcx-generator[1020]: time="2023-10-02T19:17:31Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:17:31.856820 /usr/lib/systemd/system-generators/torcx-generator[1020]: time="2023-10-02T19:17:31Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:17:31.857251 /usr/lib/systemd/system-generators/torcx-generator[1020]: time="2023-10-02T19:17:31Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:17:31.857292 /usr/lib/systemd/system-generators/torcx-generator[1020]: time="2023-10-02T19:17:31Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:17:31.857312 /usr/lib/systemd/system-generators/torcx-generator[1020]: time="2023-10-02T19:17:31Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:17:31.857329 /usr/lib/systemd/system-generators/torcx-generator[1020]: time="2023-10-02T19:17:31Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:17:31.857348 /usr/lib/systemd/system-generators/torcx-generator[1020]: time="2023-10-02T19:17:31Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:17:31.857364 /usr/lib/systemd/system-generators/torcx-generator[1020]: time="2023-10-02T19:17:31Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:17:38.433342 /usr/lib/systemd/system-generators/torcx-generator[1020]: time="2023-10-02T19:17:38Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:17:38.433601 /usr/lib/systemd/system-generators/torcx-generator[1020]: time="2023-10-02T19:17:38Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy 
/bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:17:38.433739 /usr/lib/systemd/system-generators/torcx-generator[1020]: time="2023-10-02T19:17:38Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:17:38.433925 /usr/lib/systemd/system-generators/torcx-generator[1020]: time="2023-10-02T19:17:38Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:17:38.433986 /usr/lib/systemd/system-generators/torcx-generator[1020]: time="2023-10-02T19:17:38Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:17:38.434042 /usr/lib/systemd/system-generators/torcx-generator[1020]: time="2023-10-02T19:17:38Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:17:40.072614 systemd[1]: Started systemd-journald.service. Oct 2 19:17:40.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.073144 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:17:40.077042 kernel: fuse: init (API version 7.34) Oct 2 19:17:40.077424 systemd[1]: Mounted tmp.mount. Oct 2 19:17:40.079520 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:17:40.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.082099 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:17:40.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.084370 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:17:40.084510 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:17:40.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.087417 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:17:40.087597 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:17:40.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:17:40.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.090149 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:17:40.090288 systemd[1]: Finished modprobe@drm.service. Oct 2 19:17:40.092653 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:17:40.092788 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:17:40.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.101471 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:17:40.101606 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:17:40.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.103828 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:17:40.104210 systemd[1]: Finished modprobe@loop.service. Oct 2 19:17:40.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.106313 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:17:40.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.108816 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:17:40.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.112034 systemd[1]: Finished systemd-remount-fs.service. 
Oct 2 19:17:40.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.114707 systemd[1]: Reached target network-pre.target. Oct 2 19:17:40.117964 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:17:40.122123 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:17:40.124087 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:17:40.140941 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:17:40.144997 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:17:40.147360 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:17:40.149040 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:17:40.151301 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:17:40.153008 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:17:40.157729 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:17:40.165291 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:17:40.172652 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:17:40.183949 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:17:40.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.186440 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:17:40.198465 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:17:40.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.202667 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:17:40.208495 systemd-journald[1102]: Time spent on flushing to /var/log/journal/1ff939ddfe2544e3b3a5f3405b172544 is 31.407ms for 1164 entries. Oct 2 19:17:40.208495 systemd-journald[1102]: System Journal (/var/log/journal/1ff939ddfe2544e3b3a5f3405b172544) is 8.0M, max 2.6G, 2.6G free. Oct 2 19:17:40.291390 systemd-journald[1102]: Received client request to flush runtime journal. Oct 2 19:17:40.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.254370 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:17:40.292452 udevadm[1144]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 2 19:17:40.293187 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:17:40.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.655285 systemd[1]: Finished systemd-sysusers.service. 
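The journal flush entry above reports how long systemd-journald spent writing buffered runtime entries into /var/log/journal and the current journal size limits. As a minimal sketch (not part of systemd), the snippet below pulls the flush duration and entry count out of a line in the format shown here; the sample string is copied from this log and the regex is an assumption about that format.

import re

# Sample taken from the journal flush message above.
line = ("systemd-journald[1102]: Time spent on flushing to "
        "/var/log/journal/1ff939ddfe2544e3b3a5f3405b172544 "
        "is 31.407ms for 1164 entries.")

# Assumed pattern: "... is <float>ms for <int> entries."
match = re.search(r"is ([\d.]+)ms for (\d+) entries", line)
if match:
    duration_ms = float(match.group(1))
    entries = int(match.group(2))
    # Rough per-entry cost; purely illustrative arithmetic.
    print(f"{entries} entries flushed in {duration_ms} ms "
          f"(~{duration_ms / entries * 1000:.1f} us/entry)")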
Oct 2 19:17:40.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:41.547273 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:17:41.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:41.549000 audit: BPF prog-id=24 op=LOAD Oct 2 19:17:41.549000 audit: BPF prog-id=25 op=LOAD Oct 2 19:17:41.549000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:17:41.549000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:17:41.551124 systemd[1]: Starting systemd-udevd.service... Oct 2 19:17:41.569954 systemd-udevd[1147]: Using default interface naming scheme 'v252'. Oct 2 19:17:41.818165 systemd[1]: Started systemd-udevd.service. Oct 2 19:17:41.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:41.821000 audit: BPF prog-id=26 op=LOAD Oct 2 19:17:41.824553 systemd[1]: Starting systemd-networkd.service... Oct 2 19:17:41.862368 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 19:17:41.919000 audit: BPF prog-id=27 op=LOAD Oct 2 19:17:41.920000 audit: BPF prog-id=28 op=LOAD Oct 2 19:17:41.920000 audit: BPF prog-id=29 op=LOAD Oct 2 19:17:41.922167 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:17:41.937954 kernel: hv_vmbus: registering driver hyperv_fb Oct 2 19:17:41.938000 audit[1155]: AVC avc: denied { confidentiality } for pid=1155 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 19:17:41.947932 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 19:17:41.959952 kernel: hv_vmbus: registering driver hv_balloon Oct 2 19:17:41.969365 kernel: hv_utils: Registering HyperV Utility Driver Oct 2 19:17:41.969483 kernel: hv_vmbus: registering driver hv_utils Oct 2 19:17:43.069427 kernel: hv_utils: TimeSync IC version 4.0 Oct 2 19:17:43.077613 kernel: hv_utils: Heartbeat IC version 3.0 Oct 2 19:17:43.077652 kernel: hv_utils: Shutdown IC version 3.2 Oct 2 19:17:43.077678 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Oct 2 19:17:43.077703 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Oct 2 19:17:43.083908 systemd[1]: Started systemd-userdbd.service. Oct 2 19:17:43.086566 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Oct 2 19:17:43.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:17:43.091210 kernel: Console: switching to colour dummy device 80x25 Oct 2 19:17:43.095135 kernel: Console: switching to colour frame buffer device 128x48 Oct 2 19:17:41.938000 audit[1155]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55e3f7611b40 a1=f884 a2=7ff292d7abc5 a3=5 items=10 ppid=1147 pid=1155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:41.938000 audit: CWD cwd="/" Oct 2 19:17:41.938000 audit: PATH item=0 name=(null) inode=14874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:41.938000 audit: PATH item=1 name=(null) inode=14875 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:41.938000 audit: PATH item=2 name=(null) inode=14874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:41.938000 audit: PATH item=3 name=(null) inode=14876 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:41.938000 audit: PATH item=4 name=(null) inode=14874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:41.938000 audit: PATH item=5 name=(null) inode=14877 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:41.938000 audit: PATH item=6 name=(null) inode=14874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:41.938000 audit: PATH item=7 name=(null) inode=14878 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:41.938000 audit: PATH item=8 name=(null) inode=14874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:41.938000 audit: PATH item=9 name=(null) inode=14879 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:41.938000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 19:17:43.304361 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1151) Oct 2 19:17:43.358045 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:17:43.381139 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Oct 2 19:17:43.394748 systemd-networkd[1153]: lo: Link UP Oct 2 19:17:43.395094 systemd-networkd[1153]: lo: Gained carrier Oct 2 19:17:43.395817 systemd-networkd[1153]: Enumeration completed Oct 2 19:17:43.396047 systemd[1]: Started systemd-networkd.service. Oct 2 19:17:43.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:17:43.400339 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:17:43.421681 systemd-networkd[1153]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:17:43.422519 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:17:43.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:43.426298 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:17:43.481155 kernel: mlx5_core eab2:00:02.0 enP60082s1: Link up Oct 2 19:17:43.523154 kernel: hv_netvsc 000d3ad7-d158-000d-3ad7-d158000d3ad7 eth0: Data path switched to VF: enP60082s1 Oct 2 19:17:43.524375 systemd-networkd[1153]: enP60082s1: Link UP Oct 2 19:17:43.524511 systemd-networkd[1153]: eth0: Link UP Oct 2 19:17:43.524517 systemd-networkd[1153]: eth0: Gained carrier Oct 2 19:17:43.529462 systemd-networkd[1153]: enP60082s1: Gained carrier Oct 2 19:17:43.569282 systemd-networkd[1153]: eth0: DHCPv4 address 10.200.8.48/24, gateway 10.200.8.1 acquired from 168.63.129.16 Oct 2 19:17:43.877721 lvm[1225]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:17:43.904326 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:17:43.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:43.906936 systemd[1]: Reached target cryptsetup.target. Oct 2 19:17:43.910287 systemd[1]: Starting lvm2-activation.service... Oct 2 19:17:43.914956 lvm[1226]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:17:43.935280 systemd[1]: Finished lvm2-activation.service. Oct 2 19:17:43.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:43.937981 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:17:43.940397 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:17:43.940443 systemd[1]: Reached target local-fs.target. Oct 2 19:17:43.942897 systemd[1]: Reached target machines.target. Oct 2 19:17:43.946155 systemd[1]: Starting ldconfig.service... Oct 2 19:17:43.960327 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:17:43.960436 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:17:43.961870 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:17:43.965549 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:17:43.969553 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:17:43.971988 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:17:43.972087 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:17:43.973290 systemd[1]: Starting systemd-tmpfiles-setup.service... 
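systemd-networkd reports the DHCPv4 lease for eth0 above: the address, the gateway, and the Azure platform DHCP/wire server 168.63.129.16 that handed it out. A small stdlib-only sketch, assuming the message format in this log, that validates such a lease line; the regex is an illustrative assumption, not networkd's own parser.

import ipaddress
import re

# Lease message copied from the systemd-networkd line above.
msg = ("eth0: DHCPv4 address 10.200.8.48/24, gateway 10.200.8.1 "
       "acquired from 168.63.129.16")

m = re.search(r"address (\S+), gateway (\S+) acquired from (\S+)", msg)
if m:
    iface_net = ipaddress.ip_interface(m.group(1))   # 10.200.8.48/24
    gateway = ipaddress.ip_address(m.group(2))       # 10.200.8.1
    server = ipaddress.ip_address(m.group(3))        # 168.63.129.16

    # Sanity checks: the gateway should sit inside the leased subnet, and on
    # Azure the lease comes from the well-known wire server address.
    print("gateway in subnet:", gateway in iface_net.network)
    print("leased by Azure wire server:",
          server == ipaddress.ip_address("168.63.129.16"))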
Oct 2 19:17:44.005222 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:17:44.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.012959 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1228 (bootctl) Oct 2 19:17:44.014328 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:17:44.797971 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:17:44.801324 systemd-networkd[1153]: eth0: Gained IPv6LL Oct 2 19:17:44.806977 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:17:44.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:45.378284 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:17:45.458314 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:17:45.458987 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:17:45.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:45.474200 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:17:45.728837 systemd-fsck[1236]: fsck.fat 4.2 (2021-01-31) Oct 2 19:17:45.728837 systemd-fsck[1236]: /dev/sda1: 789 files, 115069/258078 clusters Oct 2 19:17:45.727622 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:17:45.735286 kernel: kauditd_printk_skb: 81 callbacks suppressed Oct 2 19:17:45.735403 kernel: audit: type=1130 audit(1696274265.729:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:45.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:45.732157 systemd[1]: Mounting boot.mount... Oct 2 19:17:45.755841 systemd[1]: Mounted boot.mount. Oct 2 19:17:45.769341 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:17:45.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:45.783133 kernel: audit: type=1130 audit(1696274265.770:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:46.560912 systemd[1]: Finished systemd-tmpfiles-setup.service. 
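The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") arise when the same path is declared in more than one tmpfiles.d fragment; only the first declaration wins. A rough sketch under the assumption of the standard /usr/lib/tmpfiles.d and /etc/tmpfiles.d layout and plain whitespace-separated lines (real tmpfiles entries also allow quoting and specifiers, which this ignores) that reports paths declared more than once.

import glob
import collections

# Assumed fragment locations on a typical systemd system.
fragments = sorted(glob.glob("/usr/lib/tmpfiles.d/*.conf")
                   + glob.glob("/etc/tmpfiles.d/*.conf"))

owners = collections.defaultdict(list)
for conf in fragments:
    with open(conf, errors="replace") as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            fields = line.split()
            if len(fields) >= 2:
                # Field 0 is the entry type (d, L, z, ...), field 1 the path.
                owners[fields[1]].append(conf)

for path, files in owners.items():
    if len(files) > 1:
        print(f"{path} declared in: {', '.join(files)}")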
Oct 2 19:17:46.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:46.565145 systemd[1]: Starting audit-rules.service... Oct 2 19:17:46.576316 kernel: audit: type=1130 audit(1696274266.562:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:46.578691 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:17:46.582549 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:17:46.585000 audit: BPF prog-id=30 op=LOAD Oct 2 19:17:46.590147 kernel: audit: type=1334 audit(1696274266.585:167): prog-id=30 op=LOAD Oct 2 19:17:46.587308 systemd[1]: Starting systemd-resolved.service... Oct 2 19:17:46.591000 audit: BPF prog-id=31 op=LOAD Oct 2 19:17:46.593917 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:17:46.597150 kernel: audit: type=1334 audit(1696274266.591:168): prog-id=31 op=LOAD Oct 2 19:17:46.600058 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:17:46.618000 audit[1248]: SYSTEM_BOOT pid=1248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:17:46.632137 kernel: audit: type=1127 audit(1696274266.618:169): pid=1248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:17:46.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:46.642207 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:17:46.657312 kernel: audit: type=1130 audit(1696274266.643:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:46.688849 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:17:46.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:46.691479 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:17:46.703147 kernel: audit: type=1130 audit(1696274266.690:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:46.754803 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:17:46.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:46.757253 systemd[1]: Reached target time-set.target. 
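With systemd-timesyncd started and time-set.target reached above, the clock is expected to converge on NTP time shortly after boot. A small illustrative check, assuming `timedatectl` is present on the host (it ships with systemd), that asks whether the system reports itself as NTP-synchronized; this is an aside, not part of the boot flow logged here.

import subprocess

# `timedatectl show -p NTPSynchronized --value` prints "yes" or "no";
# the exact invocation is an assumption about a standard systemd install.
out = subprocess.run(
    ["timedatectl", "show", "-p", "NTPSynchronized", "--value"],
    capture_output=True, text=True,
)
print("clock NTP-synchronized:", out.stdout.strip() == "yes")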
Oct 2 19:17:46.771929 kernel: audit: type=1130 audit(1696274266.756:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:46.782694 systemd-resolved[1246]: Positive Trust Anchors: Oct 2 19:17:46.782708 systemd-resolved[1246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:17:46.782746 systemd-resolved[1246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:17:46.810900 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:17:46.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:46.828151 kernel: audit: type=1130 audit(1696274266.812:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:46.905510 systemd-resolved[1246]: Using system hostname 'ci-3510.3.0-a-eb10099fa4'. Oct 2 19:17:46.906327 augenrules[1263]: No rules Oct 2 19:17:46.905000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:17:46.905000 audit[1263]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe837e0170 a2=420 a3=0 items=0 ppid=1242 pid=1263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:46.905000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:17:46.907964 systemd[1]: Finished audit-rules.service. Oct 2 19:17:46.908000 systemd-timesyncd[1247]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org). Oct 2 19:17:46.908059 systemd-timesyncd[1247]: Initial clock synchronization to Mon 2023-10-02 19:17:46.907408 UTC. Oct 2 19:17:46.910171 systemd[1]: Started systemd-resolved.service. Oct 2 19:17:46.912460 systemd[1]: Reached target network.target. Oct 2 19:17:46.914460 systemd[1]: Reached target network-online.target. Oct 2 19:17:46.916506 systemd[1]: Reached target nss-lookup.target. Oct 2 19:17:51.836075 ldconfig[1227]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:17:51.862255 systemd[1]: Finished ldconfig.service. Oct 2 19:17:51.866064 systemd[1]: Starting systemd-update-done.service... Oct 2 19:17:51.892008 systemd[1]: Finished systemd-update-done.service. Oct 2 19:17:51.894763 systemd[1]: Reached target sysinit.target. Oct 2 19:17:51.897281 systemd[1]: Started motdgen.path. Oct 2 19:17:51.899428 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:17:51.902524 systemd[1]: Started logrotate.timer. 
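The negative trust anchors listed by systemd-resolved above are the locally served reverse zones (10.in-addr.arpa, 172.16-31, 192.168, home.arpa, and so on) for which DNSSEC validation is not attempted. A minimal stdlib sketch that maps an IPv4 address to its in-addr.arpa name and checks whether it falls inside one of those private ranges; the helper names are made up for illustration.

import ipaddress

def reverse_name(addr):
    """Return the in-addr.arpa name for an IPv4 address."""
    return ".".join(reversed(addr.split("."))) + ".in-addr.arpa"

def behind_negative_anchor(addr):
    """True if the address sits in a range covered by the private
    reverse zones listed in the resolved output above."""
    ip = ipaddress.ip_address(addr)
    private_ranges = [
        ipaddress.ip_network("10.0.0.0/8"),
        ipaddress.ip_network("172.16.0.0/12"),
        ipaddress.ip_network("192.168.0.0/16"),
    ]
    return any(ip in net for net in private_ranges)

# The host's own DHCP address from this log falls under 10.in-addr.arpa.
print(reverse_name("10.200.8.48"), behind_negative_anchor("10.200.8.48"))  # True
print(reverse_name("8.8.8.8"), behind_negative_anchor("8.8.8.8"))          # False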
Oct 2 19:17:51.904492 systemd[1]: Started mdadm.timer. Oct 2 19:17:51.906248 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:17:51.908433 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:17:51.908478 systemd[1]: Reached target paths.target. Oct 2 19:17:51.910465 systemd[1]: Reached target timers.target. Oct 2 19:17:51.912806 systemd[1]: Listening on dbus.socket. Oct 2 19:17:51.915988 systemd[1]: Starting docker.socket... Oct 2 19:17:51.933842 systemd[1]: Listening on sshd.socket. Oct 2 19:17:51.935861 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:17:51.936382 systemd[1]: Listening on docker.socket. Oct 2 19:17:51.938215 systemd[1]: Reached target sockets.target. Oct 2 19:17:51.940077 systemd[1]: Reached target basic.target. Oct 2 19:17:51.941898 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:17:51.941928 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:17:51.943036 systemd[1]: Starting containerd.service... Oct 2 19:17:51.947566 systemd[1]: Starting dbus.service... Oct 2 19:17:51.950548 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:17:51.953750 systemd[1]: Starting extend-filesystems.service... Oct 2 19:17:51.955654 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:17:51.957199 systemd[1]: Starting motdgen.service... Oct 2 19:17:51.964536 systemd[1]: Started nvidia.service. Oct 2 19:17:51.967689 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:17:51.970794 systemd[1]: Starting prepare-critools.service... Oct 2 19:17:51.973709 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:17:51.976944 systemd[1]: Starting sshd-keygen.service... Oct 2 19:17:51.983091 systemd[1]: Starting systemd-logind.service... Oct 2 19:17:51.987651 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:17:51.987732 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:17:51.988304 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:17:51.989162 systemd[1]: Starting update-engine.service... Oct 2 19:17:51.993170 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:17:52.000559 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:17:52.000777 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:17:52.063100 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:17:52.065740 jq[1290]: true Oct 2 19:17:52.063362 systemd[1]: Finished motdgen.service. Oct 2 19:17:52.066462 jq[1273]: false Oct 2 19:17:52.066979 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:17:52.067250 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
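At this point dbus.socket, docker.socket and sshd.socket are listening, so the corresponding services are socket-activated on first connection. A short sketch, assuming the conventional socket paths /var/run/docker.sock and /run/containerd/containerd.sock, that only probes whether a unix-domain socket exists and accepts a connection; it does not speak either daemon's protocol. Note that merely connecting to docker.sock is enough to trigger activation of docker.service.

import os
import socket

def socket_reachable(path):
    """Try to connect to a unix-domain socket and report success."""
    if not os.path.exists(path):
        return False
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(1.0)
    try:
        s.connect(path)
        return True
    except OSError:
        return False
    finally:
        s.close()

for p in ("/var/run/docker.sock", "/run/containerd/containerd.sock"):
    print(p, "reachable" if socket_reachable(p) else "not reachable")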
Oct 2 19:17:52.075909 extend-filesystems[1274]: Found sda Oct 2 19:17:52.078159 extend-filesystems[1274]: Found sda1 Oct 2 19:17:52.078159 extend-filesystems[1274]: Found sda2 Oct 2 19:17:52.078159 extend-filesystems[1274]: Found sda3 Oct 2 19:17:52.078159 extend-filesystems[1274]: Found usr Oct 2 19:17:52.078159 extend-filesystems[1274]: Found sda4 Oct 2 19:17:52.078159 extend-filesystems[1274]: Found sda6 Oct 2 19:17:52.078159 extend-filesystems[1274]: Found sda7 Oct 2 19:17:52.078159 extend-filesystems[1274]: Found sda9 Oct 2 19:17:52.078159 extend-filesystems[1274]: Checking size of /dev/sda9 Oct 2 19:17:52.122001 tar[1295]: crictl Oct 2 19:17:52.122394 tar[1293]: ./ Oct 2 19:17:52.122394 tar[1293]: ./macvlan Oct 2 19:17:52.122700 jq[1304]: true Oct 2 19:17:52.123548 systemd-logind[1286]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 2 19:17:52.129672 systemd-logind[1286]: New seat seat0. Oct 2 19:17:52.175211 extend-filesystems[1274]: Old size kept for /dev/sda9 Oct 2 19:17:52.179606 extend-filesystems[1274]: Found sr0 Oct 2 19:17:52.175768 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:17:52.175930 systemd[1]: Finished extend-filesystems.service. Oct 2 19:17:52.207991 env[1300]: time="2023-10-02T19:17:52.207935624Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:17:52.242778 dbus-daemon[1272]: [system] SELinux support is enabled Oct 2 19:17:52.242979 systemd[1]: Started dbus.service. Oct 2 19:17:52.247736 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:17:52.247773 systemd[1]: Reached target system-config.target. Oct 2 19:17:52.250151 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:17:52.250176 systemd[1]: Reached target user-config.target. Oct 2 19:17:52.253155 systemd[1]: Started systemd-logind.service. Oct 2 19:17:52.256471 dbus-daemon[1272]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 2 19:17:52.266156 tar[1293]: ./static Oct 2 19:17:52.303890 systemd[1]: nvidia.service: Deactivated successfully. Oct 2 19:17:52.311523 bash[1336]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:17:52.312317 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:17:52.343540 tar[1293]: ./vlan Oct 2 19:17:52.347607 env[1300]: time="2023-10-02T19:17:52.347544514Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:17:52.349709 env[1300]: time="2023-10-02T19:17:52.349680568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:17:52.354474 env[1300]: time="2023-10-02T19:17:52.354425666Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:17:52.354598 env[1300]: time="2023-10-02T19:17:52.354580762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Oct 2 19:17:52.354990 env[1300]: time="2023-10-02T19:17:52.354963054Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:17:52.355105 env[1300]: time="2023-10-02T19:17:52.355089051Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 19:17:52.355240 env[1300]: time="2023-10-02T19:17:52.355220449Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:17:52.355312 env[1300]: time="2023-10-02T19:17:52.355299247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:17:52.355513 env[1300]: time="2023-10-02T19:17:52.355460043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:17:52.355911 env[1300]: time="2023-10-02T19:17:52.355890934Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:17:52.356223 env[1300]: time="2023-10-02T19:17:52.356188628Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:17:52.356327 env[1300]: time="2023-10-02T19:17:52.356311225Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:17:52.358225 env[1300]: time="2023-10-02T19:17:52.358202084Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:17:52.358331 env[1300]: time="2023-10-02T19:17:52.358313482Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:17:52.382780 env[1300]: time="2023-10-02T19:17:52.381801475Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:17:52.382780 env[1300]: time="2023-10-02T19:17:52.381874174Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:17:52.382780 env[1300]: time="2023-10-02T19:17:52.381896073Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:17:52.382780 env[1300]: time="2023-10-02T19:17:52.381984571Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:17:52.382780 env[1300]: time="2023-10-02T19:17:52.382060170Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:17:52.382780 env[1300]: time="2023-10-02T19:17:52.382095769Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:17:52.382780 env[1300]: time="2023-10-02T19:17:52.382136868Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:17:52.382780 env[1300]: time="2023-10-02T19:17:52.382160168Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Oct 2 19:17:52.382780 env[1300]: time="2023-10-02T19:17:52.382180367Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:17:52.382780 env[1300]: time="2023-10-02T19:17:52.382198967Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:17:52.382780 env[1300]: time="2023-10-02T19:17:52.382228466Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:17:52.382780 env[1300]: time="2023-10-02T19:17:52.382245966Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:17:52.382780 env[1300]: time="2023-10-02T19:17:52.382409562Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:17:52.382780 env[1300]: time="2023-10-02T19:17:52.382532360Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:17:52.385037 env[1300]: time="2023-10-02T19:17:52.383606336Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:17:52.385037 env[1300]: time="2023-10-02T19:17:52.383650636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:17:52.385037 env[1300]: time="2023-10-02T19:17:52.383687335Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:17:52.385037 env[1300]: time="2023-10-02T19:17:52.383763733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:17:52.385037 env[1300]: time="2023-10-02T19:17:52.383782233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:17:52.385037 env[1300]: time="2023-10-02T19:17:52.383803632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:17:52.385037 env[1300]: time="2023-10-02T19:17:52.383883630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:17:52.385037 env[1300]: time="2023-10-02T19:17:52.383901330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:17:52.385037 env[1300]: time="2023-10-02T19:17:52.383919630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:17:52.385037 env[1300]: time="2023-10-02T19:17:52.383933529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:17:52.385037 env[1300]: time="2023-10-02T19:17:52.383957829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:17:52.385037 env[1300]: time="2023-10-02T19:17:52.383974929Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:17:52.385037 env[1300]: time="2023-10-02T19:17:52.384171524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:17:52.385037 env[1300]: time="2023-10-02T19:17:52.384208723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Oct 2 19:17:52.385037 env[1300]: time="2023-10-02T19:17:52.384227123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:17:52.385608 env[1300]: time="2023-10-02T19:17:52.384243923Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:17:52.385608 env[1300]: time="2023-10-02T19:17:52.384274822Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:17:52.385608 env[1300]: time="2023-10-02T19:17:52.384289722Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:17:52.385608 env[1300]: time="2023-10-02T19:17:52.384313021Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:17:52.385608 env[1300]: time="2023-10-02T19:17:52.384363720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 19:17:52.385786 env[1300]: time="2023-10-02T19:17:52.384667414Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:17:52.385786 env[1300]: time="2023-10-02T19:17:52.384750512Z" level=info msg="Connect containerd service" Oct 2 19:17:52.385786 env[1300]: time="2023-10-02T19:17:52.384788511Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 
19:17:52.416874 env[1300]: time="2023-10-02T19:17:52.387027963Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:17:52.416874 env[1300]: time="2023-10-02T19:17:52.387197959Z" level=info msg="Start subscribing containerd event" Oct 2 19:17:52.416874 env[1300]: time="2023-10-02T19:17:52.387264858Z" level=info msg="Start recovering state" Oct 2 19:17:52.416874 env[1300]: time="2023-10-02T19:17:52.387342956Z" level=info msg="Start event monitor" Oct 2 19:17:52.416874 env[1300]: time="2023-10-02T19:17:52.387361856Z" level=info msg="Start snapshots syncer" Oct 2 19:17:52.416874 env[1300]: time="2023-10-02T19:17:52.387372355Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:17:52.416874 env[1300]: time="2023-10-02T19:17:52.387381055Z" level=info msg="Start streaming server" Oct 2 19:17:52.416874 env[1300]: time="2023-10-02T19:17:52.387892744Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:17:52.416874 env[1300]: time="2023-10-02T19:17:52.387964343Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:17:52.416874 env[1300]: time="2023-10-02T19:17:52.410493857Z" level=info msg="containerd successfully booted in 0.209875s" Oct 2 19:17:52.407391 systemd[1]: Started containerd.service. Oct 2 19:17:52.417357 tar[1293]: ./portmap Oct 2 19:17:52.505305 tar[1293]: ./host-local Oct 2 19:17:52.553311 tar[1293]: ./vrf Oct 2 19:17:52.591393 tar[1293]: ./bridge Oct 2 19:17:52.637355 tar[1293]: ./tuning Oct 2 19:17:52.676542 tar[1293]: ./firewall Oct 2 19:17:52.724349 tar[1293]: ./host-device Oct 2 19:17:52.767248 tar[1293]: ./sbr Oct 2 19:17:52.805770 tar[1293]: ./loopback Oct 2 19:17:52.833552 update_engine[1287]: I1002 19:17:52.833034 1287 main.cc:92] Flatcar Update Engine starting Oct 2 19:17:52.843268 tar[1293]: ./dhcp Oct 2 19:17:52.883022 systemd[1]: Started update-engine.service. Oct 2 19:17:52.891628 update_engine[1287]: I1002 19:17:52.883095 1287 update_check_scheduler.cc:74] Next update check in 11m34s Oct 2 19:17:52.888542 systemd[1]: Started locksmithd.service. Oct 2 19:17:52.931822 systemd[1]: Finished prepare-critools.service. Oct 2 19:17:52.973440 tar[1293]: ./ptp Oct 2 19:17:53.015830 tar[1293]: ./ipvlan Oct 2 19:17:53.056694 tar[1293]: ./bandwidth Oct 2 19:17:53.136895 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:17:53.923237 sshd_keygen[1294]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:17:53.943465 systemd[1]: Finished sshd-keygen.service. Oct 2 19:17:53.947709 systemd[1]: Starting issuegen.service... Oct 2 19:17:53.951209 systemd[1]: Started waagent.service. Oct 2 19:17:53.957683 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:17:53.957854 systemd[1]: Finished issuegen.service. Oct 2 19:17:53.961651 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:17:53.985177 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:17:53.989269 systemd[1]: Started getty@tty1.service. Oct 2 19:17:53.992959 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 19:17:53.995841 systemd[1]: Reached target getty.target. Oct 2 19:17:53.998126 systemd[1]: Reached target multi-user.target. Oct 2 19:17:54.002007 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:17:54.010565 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
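prepare-cni-plugins and prepare-critools have finished unpacking the plugin set listed in the tar messages above (bridge, host-local, portmap, and the rest), and the containerd CRI configuration earlier points NetworkPluginBinDir at /opt/cni/bin. A quick sketch, assuming that directory is where the binaries landed, that compares what is installed against the plugin names seen in this log.

import os

# Plugin names taken from the tar extraction messages above.
expected = {
    "macvlan", "static", "vlan", "portmap", "host-local", "vrf", "bridge",
    "tuning", "firewall", "host-device", "sbr", "loopback", "dhcp", "ptp",
    "ipvlan", "bandwidth",
}

# Assumed install location, matching NetworkPluginBinDir in the CRI config.
cni_bin = "/opt/cni/bin"
installed = set(os.listdir(cni_bin)) if os.path.isdir(cni_bin) else set()

print("missing:", sorted(expected - installed))
print("extra:  ", sorted(installed - expected))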
Oct 2 19:17:54.010717 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:17:54.013383 systemd[1]: Startup finished in 936ms (firmware) + 26.588s (loader) + 921ms (kernel) + 19.700s (initrd) + 23.504s (userspace) = 1min 11.650s. Oct 2 19:17:54.375158 login[1395]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 2 19:17:54.376659 login[1396]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 2 19:17:54.398543 systemd[1]: Created slice user-500.slice. Oct 2 19:17:54.400137 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:17:54.402500 systemd-logind[1286]: New session 1 of user core. Oct 2 19:17:54.408255 systemd-logind[1286]: New session 2 of user core. Oct 2 19:17:54.424511 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:17:54.425304 locksmithd[1374]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:17:54.426620 systemd[1]: Starting user@500.service... Oct 2 19:17:54.429930 (systemd)[1399]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:17:54.655215 systemd[1399]: Queued start job for default target default.target. Oct 2 19:17:54.655976 systemd[1399]: Reached target paths.target. Oct 2 19:17:54.656014 systemd[1399]: Reached target sockets.target. Oct 2 19:17:54.656036 systemd[1399]: Reached target timers.target. Oct 2 19:17:54.656055 systemd[1399]: Reached target basic.target. Oct 2 19:17:54.656139 systemd[1399]: Reached target default.target. Oct 2 19:17:54.656189 systemd[1399]: Startup finished in 220ms. Oct 2 19:17:54.656521 systemd[1]: Started user@500.service. Oct 2 19:17:54.658178 systemd[1]: Started session-1.scope. Oct 2 19:17:54.659209 systemd[1]: Started session-2.scope. Oct 2 19:18:00.929352 waagent[1390]: 2023-10-02T19:18:00.929207Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Oct 2 19:18:00.941884 waagent[1390]: 2023-10-02T19:18:00.930566Z INFO Daemon Daemon OS: flatcar 3510.3.0 Oct 2 19:18:00.941884 waagent[1390]: 2023-10-02T19:18:00.931455Z INFO Daemon Daemon Python: 3.9.16 Oct 2 19:18:00.941884 waagent[1390]: 2023-10-02T19:18:00.932868Z INFO Daemon Daemon Run daemon Oct 2 19:18:00.941884 waagent[1390]: 2023-10-02T19:18:00.933968Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.0' Oct 2 19:18:00.946309 waagent[1390]: 2023-10-02T19:18:00.946173Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
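The waagent daemon decides whether cloud-init or waagent owns provisioning by running the command quoted in the error above, `systemctl is-enabled cloud-init-local.service`; since it fails here, cloud-init is treated as disabled and waagent provisions the host itself. A minimal sketch of that same check, assuming systemctl is on PATH.

import subprocess

def cloud_init_enabled():
    """Mirror the check quoted in the waagent log above: a zero exit
    status from `systemctl is-enabled cloud-init-local.service` means
    cloud-init is enabled; anything else is treated as disabled."""
    result = subprocess.run(
        ["systemctl", "is-enabled", "cloud-init-local.service"],
        capture_output=True, text=True,
    )
    return result.returncode == 0

print("cloud-init is enabled:", cloud_init_enabled())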
Oct 2 19:18:00.954223 waagent[1390]: 2023-10-02T19:18:00.954087Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Oct 2 19:18:00.976194 waagent[1390]: 2023-10-02T19:18:00.954697Z INFO Daemon Daemon cloud-init is enabled: False Oct 2 19:18:00.976194 waagent[1390]: 2023-10-02T19:18:00.955323Z INFO Daemon Daemon Using waagent for provisioning Oct 2 19:18:00.976194 waagent[1390]: 2023-10-02T19:18:00.956749Z INFO Daemon Daemon Activate resource disk Oct 2 19:18:00.976194 waagent[1390]: 2023-10-02T19:18:00.957534Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Oct 2 19:18:00.976194 waagent[1390]: 2023-10-02T19:18:00.965356Z INFO Daemon Daemon Found device: None Oct 2 19:18:00.976194 waagent[1390]: 2023-10-02T19:18:00.965999Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Oct 2 19:18:00.976194 waagent[1390]: 2023-10-02T19:18:00.966830Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Oct 2 19:18:00.976194 waagent[1390]: 2023-10-02T19:18:00.968187Z INFO Daemon Daemon Clean protocol and wireserver endpoint Oct 2 19:18:00.976194 waagent[1390]: 2023-10-02T19:18:00.969060Z INFO Daemon Daemon Running default provisioning handler Oct 2 19:18:00.988256 waagent[1390]: 2023-10-02T19:18:00.988084Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Oct 2 19:18:01.001817 waagent[1390]: 2023-10-02T19:18:00.990890Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Oct 2 19:18:01.001817 waagent[1390]: 2023-10-02T19:18:00.992036Z INFO Daemon Daemon cloud-init is enabled: False Oct 2 19:18:01.001817 waagent[1390]: 2023-10-02T19:18:00.992838Z INFO Daemon Daemon Copying ovf-env.xml Oct 2 19:18:01.093852 waagent[1390]: 2023-10-02T19:18:01.093685Z INFO Daemon Daemon Successfully mounted dvd Oct 2 19:18:01.171078 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Oct 2 19:18:01.194210 waagent[1390]: 2023-10-02T19:18:01.194054Z INFO Daemon Daemon Detect protocol endpoint Oct 2 19:18:01.203827 waagent[1390]: 2023-10-02T19:18:01.203720Z INFO Daemon Daemon Clean protocol and wireserver endpoint Oct 2 19:18:01.206812 waagent[1390]: 2023-10-02T19:18:01.206728Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Oct 2 19:18:01.210192 waagent[1390]: 2023-10-02T19:18:01.210097Z INFO Daemon Daemon Test for route to 168.63.129.16 Oct 2 19:18:01.213048 waagent[1390]: 2023-10-02T19:18:01.212972Z INFO Daemon Daemon Route to 168.63.129.16 exists Oct 2 19:18:01.215693 waagent[1390]: 2023-10-02T19:18:01.215625Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Oct 2 19:18:01.361849 waagent[1390]: 2023-10-02T19:18:01.361768Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Oct 2 19:18:01.370675 waagent[1390]: 2023-10-02T19:18:01.362723Z INFO Daemon Daemon Wire protocol version:2012-11-30 Oct 2 19:18:01.370675 waagent[1390]: 2023-10-02T19:18:01.363800Z INFO Daemon Daemon Server preferred version:2015-04-05 Oct 2 19:18:01.639673 waagent[1390]: 2023-10-02T19:18:01.639465Z INFO Daemon Daemon Initializing goal state during protocol detection Oct 2 19:18:01.653887 waagent[1390]: 2023-10-02T19:18:01.653796Z INFO Daemon Daemon Forcing an update of the goal state.. Oct 2 19:18:01.659369 waagent[1390]: 2023-10-02T19:18:01.654319Z INFO Daemon Daemon Fetching goal state [incarnation 1] Oct 2 19:18:01.738739 waagent[1390]: 2023-10-02T19:18:01.738605Z INFO Daemon Daemon Found private key matching thumbprint 8C568492A125F1299BB22012597070D56F40E5E3 Oct 2 19:18:01.743860 waagent[1390]: 2023-10-02T19:18:01.743768Z INFO Daemon Daemon Certificate with thumbprint 90A04A0267B1E5E59C5CB55534A0F3DA65AF0DC7 has no matching private key. Oct 2 19:18:01.748967 waagent[1390]: 2023-10-02T19:18:01.748881Z INFO Daemon Daemon Fetch goal state completed Oct 2 19:18:01.804093 waagent[1390]: 2023-10-02T19:18:01.803989Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: a5eeee9b-1fb9-42f9-8abb-fc545f79a607 New eTag: 16977732897078685077] Oct 2 19:18:01.810231 waagent[1390]: 2023-10-02T19:18:01.810142Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Oct 2 19:18:01.825090 waagent[1390]: 2023-10-02T19:18:01.825002Z INFO Daemon Daemon Starting provisioning Oct 2 19:18:01.827855 waagent[1390]: 2023-10-02T19:18:01.827777Z INFO Daemon Daemon Handle ovf-env.xml. Oct 2 19:18:01.830299 waagent[1390]: 2023-10-02T19:18:01.830235Z INFO Daemon Daemon Set hostname [ci-3510.3.0-a-eb10099fa4] Oct 2 19:18:01.850823 waagent[1390]: 2023-10-02T19:18:01.850663Z INFO Daemon Daemon Publish hostname [ci-3510.3.0-a-eb10099fa4] Oct 2 19:18:01.854770 waagent[1390]: 2023-10-02T19:18:01.854663Z INFO Daemon Daemon Examine /proc/net/route for primary interface Oct 2 19:18:01.858436 waagent[1390]: 2023-10-02T19:18:01.858349Z INFO Daemon Daemon Primary interface is [eth0] Oct 2 19:18:01.873055 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Oct 2 19:18:01.873331 systemd[1]: Stopped systemd-networkd-wait-online.service. Oct 2 19:18:01.873402 systemd[1]: Stopping systemd-networkd-wait-online.service... Oct 2 19:18:01.873747 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:18:01.877168 systemd-networkd[1153]: eth0: DHCPv6 lease lost Oct 2 19:18:01.878575 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:18:01.878775 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:18:01.881359 systemd[1]: Starting systemd-networkd.service... 
Oct 2 19:18:01.912195 systemd-networkd[1448]: enP60082s1: Link UP Oct 2 19:18:01.912203 systemd-networkd[1448]: enP60082s1: Gained carrier Oct 2 19:18:01.913592 systemd-networkd[1448]: eth0: Link UP Oct 2 19:18:01.913602 systemd-networkd[1448]: eth0: Gained carrier Oct 2 19:18:01.914036 systemd-networkd[1448]: lo: Link UP Oct 2 19:18:01.914046 systemd-networkd[1448]: lo: Gained carrier Oct 2 19:18:01.914368 systemd-networkd[1448]: eth0: Gained IPv6LL Oct 2 19:18:01.915491 systemd-networkd[1448]: Enumeration completed Oct 2 19:18:01.915610 systemd[1]: Started systemd-networkd.service. Oct 2 19:18:01.919383 waagent[1390]: 2023-10-02T19:18:01.917086Z INFO Daemon Daemon Create user account if not exists Oct 2 19:18:01.919383 waagent[1390]: 2023-10-02T19:18:01.917881Z INFO Daemon Daemon User core already exists, skip useradd Oct 2 19:18:01.919383 waagent[1390]: 2023-10-02T19:18:01.918766Z INFO Daemon Daemon Configure sudoer Oct 2 19:18:01.920166 waagent[1390]: 2023-10-02T19:18:01.920085Z INFO Daemon Daemon Configure sshd Oct 2 19:18:01.921139 waagent[1390]: 2023-10-02T19:18:01.921073Z INFO Daemon Daemon Deploy ssh public key. Oct 2 19:18:01.929718 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:18:01.933659 systemd-networkd[1448]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:18:01.964227 systemd-networkd[1448]: eth0: DHCPv4 address 10.200.8.48/24, gateway 10.200.8.1 acquired from 168.63.129.16 Oct 2 19:18:01.968328 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:18:03.322469 waagent[1390]: 2023-10-02T19:18:03.322374Z INFO Daemon Daemon Provisioning complete Oct 2 19:18:03.340283 waagent[1390]: 2023-10-02T19:18:03.340209Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Oct 2 19:18:03.343853 waagent[1390]: 2023-10-02T19:18:03.343778Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Oct 2 19:18:03.349544 waagent[1390]: 2023-10-02T19:18:03.349476Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Oct 2 19:18:03.622167 waagent[1457]: 2023-10-02T19:18:03.621971Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Oct 2 19:18:03.622914 waagent[1457]: 2023-10-02T19:18:03.622843Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 2 19:18:03.623059 waagent[1457]: 2023-10-02T19:18:03.623004Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 2 19:18:03.634482 waagent[1457]: 2023-10-02T19:18:03.634404Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Oct 2 19:18:03.634665 waagent[1457]: 2023-10-02T19:18:03.634609Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Oct 2 19:18:03.698006 waagent[1457]: 2023-10-02T19:18:03.697875Z INFO ExtHandler ExtHandler Found private key matching thumbprint 8C568492A125F1299BB22012597070D56F40E5E3 Oct 2 19:18:03.698270 waagent[1457]: 2023-10-02T19:18:03.698202Z INFO ExtHandler ExtHandler Certificate with thumbprint 90A04A0267B1E5E59C5CB55534A0F3DA65AF0DC7 has no matching private key. 
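The two 40-hex-digit strings the agent reports here are X.509 thumbprints; a thumbprint of this form is conventionally the SHA-1 digest of the DER-encoded certificate, printed as uppercase hex. A minimal sketch, assuming Python; the file path is hypothetical, since the goal-state certificates themselves are not part of this log.

import hashlib

def der_thumbprint(path: str) -> str:
    # SHA-1 over the raw DER bytes, uppercased -- the same 40-hex-digit form as
    # 8C568492A125F1299BB22012597070D56F40E5E3 reported above.
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest().upper()

# print(der_thumbprint("example-cert.der"))  # hypothetical file name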
Oct 2 19:18:03.698525 waagent[1457]: 2023-10-02T19:18:03.698473Z INFO ExtHandler ExtHandler Fetch goal state completed Oct 2 19:18:03.713460 waagent[1457]: 2023-10-02T19:18:03.713396Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: ec33d234-1bf4-4ef7-8dbd-e90d45e72f9c New eTag: 16977732897078685077] Oct 2 19:18:03.714076 waagent[1457]: 2023-10-02T19:18:03.714014Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Oct 2 19:18:03.811670 waagent[1457]: 2023-10-02T19:18:03.811503Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.0; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Oct 2 19:18:03.821856 waagent[1457]: 2023-10-02T19:18:03.821759Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1457 Oct 2 19:18:03.825332 waagent[1457]: 2023-10-02T19:18:03.825262Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.0', '', 'Flatcar Container Linux by Kinvolk'] Oct 2 19:18:03.826573 waagent[1457]: 2023-10-02T19:18:03.826513Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Oct 2 19:18:03.922219 waagent[1457]: 2023-10-02T19:18:03.922078Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Oct 2 19:18:03.922599 waagent[1457]: 2023-10-02T19:18:03.922527Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Oct 2 19:18:03.930748 waagent[1457]: 2023-10-02T19:18:03.930688Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Oct 2 19:18:03.931291 waagent[1457]: 2023-10-02T19:18:03.931227Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Oct 2 19:18:03.932413 waagent[1457]: 2023-10-02T19:18:03.932344Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Oct 2 19:18:03.933706 waagent[1457]: 2023-10-02T19:18:03.933646Z INFO ExtHandler ExtHandler Starting env monitor service. Oct 2 19:18:03.934691 waagent[1457]: 2023-10-02T19:18:03.934636Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 2 19:18:03.934772 waagent[1457]: 2023-10-02T19:18:03.934719Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 2 19:18:03.934938 waagent[1457]: 2023-10-02T19:18:03.934884Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Oct 2 19:18:03.935046 waagent[1457]: 2023-10-02T19:18:03.934994Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 2 19:18:03.935691 waagent[1457]: 2023-10-02T19:18:03.935635Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Oct 2 19:18:03.936619 waagent[1457]: 2023-10-02T19:18:03.936559Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Oct 2 19:18:03.936863 waagent[1457]: 2023-10-02T19:18:03.936806Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Oct 2 19:18:03.936863 waagent[1457]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Oct 2 19:18:03.936863 waagent[1457]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Oct 2 19:18:03.936863 waagent[1457]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Oct 2 19:18:03.936863 waagent[1457]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Oct 2 19:18:03.936863 waagent[1457]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 2 19:18:03.936863 waagent[1457]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 2 19:18:03.937183 waagent[1457]: 2023-10-02T19:18:03.936886Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Oct 2 19:18:03.937527 waagent[1457]: 2023-10-02T19:18:03.937465Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Oct 2 19:18:03.937825 waagent[1457]: 2023-10-02T19:18:03.937776Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 2 19:18:03.937911 waagent[1457]: 2023-10-02T19:18:03.937855Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Oct 2 19:18:03.939056 waagent[1457]: 2023-10-02T19:18:03.939002Z INFO EnvHandler ExtHandler Configure routes Oct 2 19:18:03.942064 waagent[1457]: 2023-10-02T19:18:03.941857Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Oct 2 19:18:03.942386 waagent[1457]: 2023-10-02T19:18:03.942312Z INFO EnvHandler ExtHandler Gateway:None Oct 2 19:18:03.944350 waagent[1457]: 2023-10-02T19:18:03.944300Z INFO EnvHandler ExtHandler Routes:None Oct 2 19:18:03.954178 waagent[1457]: 2023-10-02T19:18:03.954104Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Oct 2 19:18:03.954784 waagent[1457]: 2023-10-02T19:18:03.954735Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Oct 2 19:18:03.955783 waagent[1457]: 2023-10-02T19:18:03.955720Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Oct 2 19:18:03.971163 waagent[1457]: 2023-10-02T19:18:03.971077Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1448' Oct 2 19:18:04.015469 waagent[1457]: 2023-10-02T19:18:04.015389Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
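The routing table dump above is the raw /proc/net/route format, where each address is a 32-bit value printed as hex in host byte order (little-endian on this x86 guest). A small sketch that turns those fields back into dotted quads, applied to the entries shown:

import socket, struct

def route_hex_to_ip(field: str) -> str:
    # /proc/net/route prints addresses in host byte order; on x86 that is little-endian.
    return socket.inet_ntoa(struct.pack("<I", int(field, 16)))

print(route_hex_to_ip("0108C80A"))  # 10.200.8.1      -- default gateway from the DHCP lease
print(route_hex_to_ip("0008C80A"))  # 10.200.8.0      -- local subnet (mask 00FFFFFF = /24)
print(route_hex_to_ip("10813FA8"))  # 168.63.129.16   -- the wireserver the daemon tested a route to
print(route_hex_to_ip("FEA9FEA9"))  # 169.254.169.254 -- instance metadata endpoint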
Oct 2 19:18:04.054098 waagent[1457]: 2023-10-02T19:18:04.053964Z INFO MonitorHandler ExtHandler Network interfaces: Oct 2 19:18:04.054098 waagent[1457]: Executing ['ip', '-a', '-o', 'link']: Oct 2 19:18:04.054098 waagent[1457]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Oct 2 19:18:04.054098 waagent[1457]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d7:d1:58 brd ff:ff:ff:ff:ff:ff Oct 2 19:18:04.054098 waagent[1457]: 3: enP60082s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d7:d1:58 brd ff:ff:ff:ff:ff:ff\ altname enP60082p0s2 Oct 2 19:18:04.054098 waagent[1457]: Executing ['ip', '-4', '-a', '-o', 'address']: Oct 2 19:18:04.054098 waagent[1457]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Oct 2 19:18:04.054098 waagent[1457]: 2: eth0 inet 10.200.8.48/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Oct 2 19:18:04.054098 waagent[1457]: Executing ['ip', '-6', '-a', '-o', 'address']: Oct 2 19:18:04.054098 waagent[1457]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Oct 2 19:18:04.054098 waagent[1457]: 2: eth0 inet6 fe80::20d:3aff:fed7:d158/64 scope link \ valid_lft forever preferred_lft forever Oct 2 19:18:04.303761 waagent[1457]: 2023-10-02T19:18:04.303634Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Oct 2 19:18:04.306974 waagent[1457]: 2023-10-02T19:18:04.306863Z INFO EnvHandler ExtHandler Firewall rules: Oct 2 19:18:04.306974 waagent[1457]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Oct 2 19:18:04.306974 waagent[1457]: pkts bytes target prot opt in out source destination Oct 2 19:18:04.306974 waagent[1457]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Oct 2 19:18:04.306974 waagent[1457]: pkts bytes target prot opt in out source destination Oct 2 19:18:04.306974 waagent[1457]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Oct 2 19:18:04.306974 waagent[1457]: pkts bytes target prot opt in out source destination Oct 2 19:18:04.306974 waagent[1457]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Oct 2 19:18:04.306974 waagent[1457]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Oct 2 19:18:04.308385 waagent[1457]: 2023-10-02T19:18:04.308327Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Oct 2 19:18:04.424254 waagent[1457]: 2023-10-02T19:18:04.424174Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.10.0.3 -- exiting Oct 2 19:18:05.353626 waagent[1390]: 2023-10-02T19:18:05.353438Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Oct 2 19:18:05.359933 waagent[1390]: 2023-10-02T19:18:05.359862Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.10.0.3 to be the latest agent Oct 2 19:18:06.386655 waagent[1497]: 2023-10-02T19:18:06.386541Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.10.0.3) Oct 2 19:18:06.387437 waagent[1497]: 2023-10-02T19:18:06.387344Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.0 Oct 2 19:18:06.387567 waagent[1497]: 2023-10-02T19:18:06.387512Z INFO ExtHandler ExtHandler Python: 3.9.16 Oct 2 
19:18:06.397264 waagent[1497]: 2023-10-02T19:18:06.397159Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.0; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Oct 2 19:18:06.397679 waagent[1497]: 2023-10-02T19:18:06.397617Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 2 19:18:06.397845 waagent[1497]: 2023-10-02T19:18:06.397795Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 2 19:18:06.409714 waagent[1497]: 2023-10-02T19:18:06.409626Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Oct 2 19:18:06.423036 waagent[1497]: 2023-10-02T19:18:06.422965Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Oct 2 19:18:06.424045 waagent[1497]: 2023-10-02T19:18:06.423978Z INFO ExtHandler Oct 2 19:18:06.424218 waagent[1497]: 2023-10-02T19:18:06.424164Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 3c1ea307-b878-49d2-a021-afdfe727840d eTag: 16977732897078685077 source: Fabric] Oct 2 19:18:06.424922 waagent[1497]: 2023-10-02T19:18:06.424862Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Oct 2 19:18:06.426028 waagent[1497]: 2023-10-02T19:18:06.425964Z INFO ExtHandler Oct 2 19:18:06.426177 waagent[1497]: 2023-10-02T19:18:06.426108Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Oct 2 19:18:06.433492 waagent[1497]: 2023-10-02T19:18:06.433438Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Oct 2 19:18:06.433943 waagent[1497]: 2023-10-02T19:18:06.433894Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Oct 2 19:18:06.455919 waagent[1497]: 2023-10-02T19:18:06.455837Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Oct 2 19:18:06.523484 waagent[1497]: 2023-10-02T19:18:06.523352Z INFO ExtHandler Downloaded certificate {'thumbprint': '90A04A0267B1E5E59C5CB55534A0F3DA65AF0DC7', 'hasPrivateKey': False} Oct 2 19:18:06.524521 waagent[1497]: 2023-10-02T19:18:06.524452Z INFO ExtHandler Downloaded certificate {'thumbprint': '8C568492A125F1299BB22012597070D56F40E5E3', 'hasPrivateKey': True} Oct 2 19:18:06.525522 waagent[1497]: 2023-10-02T19:18:06.525456Z INFO ExtHandler Fetch goal state completed Oct 2 19:18:06.550087 waagent[1497]: 2023-10-02T19:18:06.549994Z INFO ExtHandler ExtHandler WALinuxAgent-2.10.0.3 running as process 1497 Oct 2 19:18:06.553508 waagent[1497]: 2023-10-02T19:18:06.553439Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.0', '', 'Flatcar Container Linux by Kinvolk'] Oct 2 19:18:06.554977 waagent[1497]: 2023-10-02T19:18:06.554916Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Oct 2 19:18:06.560189 waagent[1497]: 2023-10-02T19:18:06.560111Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Oct 2 19:18:06.560572 waagent[1497]: 2023-10-02T19:18:06.560513Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Oct 2 19:18:06.568690 waagent[1497]: 2023-10-02T19:18:06.568633Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Oct 2 19:18:06.569178 waagent[1497]: 2023-10-02T19:18:06.569104Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Oct 2 19:18:06.592542 waagent[1497]: 2023-10-02T19:18:06.592402Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now. Oct 2 19:18:06.595801 waagent[1497]: 2023-10-02T19:18:06.595687Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver Oct 2 19:18:06.601362 waagent[1497]: 2023-10-02T19:18:06.601295Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Oct 2 19:18:06.602844 waagent[1497]: 2023-10-02T19:18:06.602782Z INFO ExtHandler ExtHandler Starting env monitor service. Oct 2 19:18:06.603698 waagent[1497]: 2023-10-02T19:18:06.603640Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Oct 2 19:18:06.604072 waagent[1497]: 2023-10-02T19:18:06.604014Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 2 19:18:06.604312 waagent[1497]: 2023-10-02T19:18:06.604257Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 2 19:18:06.604492 waagent[1497]: 2023-10-02T19:18:06.604428Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 2 19:18:06.604649 waagent[1497]: 2023-10-02T19:18:06.604601Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 2 19:18:06.605023 waagent[1497]: 2023-10-02T19:18:06.604961Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Oct 2 19:18:06.605217 waagent[1497]: 2023-10-02T19:18:06.605165Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Oct 2 19:18:06.606002 waagent[1497]: 2023-10-02T19:18:06.605944Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Oct 2 19:18:06.606485 waagent[1497]: 2023-10-02T19:18:06.606426Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Oct 2 19:18:06.606933 waagent[1497]: 2023-10-02T19:18:06.606877Z INFO EnvHandler ExtHandler Configure routes Oct 2 19:18:06.607342 waagent[1497]: 2023-10-02T19:18:06.607286Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Oct 2 19:18:06.607342 waagent[1497]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Oct 2 19:18:06.607342 waagent[1497]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Oct 2 19:18:06.607342 waagent[1497]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Oct 2 19:18:06.607342 waagent[1497]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Oct 2 19:18:06.607342 waagent[1497]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 2 19:18:06.607342 waagent[1497]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 2 19:18:06.607636 waagent[1497]: 2023-10-02T19:18:06.607366Z INFO EnvHandler ExtHandler Gateway:None Oct 2 19:18:06.607636 waagent[1497]: 2023-10-02T19:18:06.607520Z INFO EnvHandler ExtHandler Routes:None Oct 2 19:18:06.612437 waagent[1497]: 2023-10-02T19:18:06.612238Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Oct 2 19:18:06.613359 waagent[1497]: 2023-10-02T19:18:06.613294Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Oct 2 19:18:06.645033 waagent[1497]: 2023-10-02T19:18:06.644903Z INFO ExtHandler ExtHandler Downloading agent manifest Oct 2 19:18:06.645196 waagent[1497]: 2023-10-02T19:18:06.645087Z INFO MonitorHandler ExtHandler Network interfaces: Oct 2 19:18:06.645196 waagent[1497]: Executing ['ip', '-a', '-o', 'link']: Oct 2 19:18:06.645196 waagent[1497]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Oct 2 19:18:06.645196 waagent[1497]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d7:d1:58 brd ff:ff:ff:ff:ff:ff Oct 2 19:18:06.645196 waagent[1497]: 3: enP60082s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d7:d1:58 brd ff:ff:ff:ff:ff:ff\ altname enP60082p0s2 Oct 2 19:18:06.645196 waagent[1497]: Executing ['ip', '-4', '-a', '-o', 'address']: Oct 2 19:18:06.645196 waagent[1497]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Oct 2 19:18:06.645196 waagent[1497]: 2: eth0 inet 10.200.8.48/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Oct 2 19:18:06.645196 waagent[1497]: Executing ['ip', '-6', '-a', '-o', 'address']: Oct 2 19:18:06.645196 waagent[1497]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Oct 2 19:18:06.645196 waagent[1497]: 2: eth0 inet6 fe80::20d:3aff:fed7:d158/64 scope link \ valid_lft forever preferred_lft forever Oct 2 19:18:06.705345 waagent[1497]: 2023-10-02T19:18:06.705275Z INFO EnvHandler ExtHandler Current Firewall rules: Oct 2 19:18:06.705345 waagent[1497]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Oct 2 19:18:06.705345 waagent[1497]: pkts bytes target prot opt in out source destination Oct 2 19:18:06.705345 waagent[1497]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Oct 2 19:18:06.705345 waagent[1497]: pkts bytes target prot opt in out source destination Oct 2 19:18:06.705345 waagent[1497]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Oct 2 19:18:06.705345 waagent[1497]: pkts bytes target prot opt in out source destination Oct 2 19:18:06.705345 waagent[1497]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Oct 2 19:18:06.705345 waagent[1497]: 124 14145 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Oct 2 19:18:06.705345 waagent[1497]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Oct 2 19:18:06.712565 waagent[1497]: 2023-10-02T19:18:06.712493Z INFO ExtHandler ExtHandler Oct 2 19:18:06.713054 waagent[1497]: 2023-10-02T19:18:06.712996Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 9b3da32e-47b8-4853-b26a-91e34fa1a00f correlation 2e66cd00-9b1e-490e-b045-cbf7f9af2c32 created: 2023-10-02T19:16:31.057808Z] Oct 2 19:18:06.716564 waagent[1497]: 2023-10-02T19:18:06.716501Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
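The OUTPUT-chain rules listed above restrict traffic to the wireserver: TCP to port 53 and root-owned (UID 0) TCP are accepted, and any other new or invalid connection to 168.63.129.16 is dropped. A rough, hypothetical reconstruction of equivalent iptables invocations follows; the agent builds these rules internally, so the exact options it passes may differ, and running this sketch requires root.

import subprocess

WIRESERVER = "168.63.129.16"

# Roughly equivalent to the three OUTPUT rules in the listing above; order matters,
# since the ACCEPT rules must precede the catch-all DROP.
rules = [
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]
for rule in rules:
    subprocess.run(["iptables", "-w"] + rule, check=True)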
Oct 2 19:18:06.719259 waagent[1497]: 2023-10-02T19:18:06.719202Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 6 ms] Oct 2 19:18:06.742605 waagent[1497]: 2023-10-02T19:18:06.742539Z INFO ExtHandler ExtHandler Looking for existing remote access users. Oct 2 19:18:06.752710 waagent[1497]: 2023-10-02T19:18:06.752625Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.10.0.3 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C50AE2CB-F344-494F-B644-51072BD02E27;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Oct 2 19:18:31.183209 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Oct 2 19:18:37.808672 systemd[1]: Created slice system-sshd.slice. Oct 2 19:18:37.810592 systemd[1]: Started sshd@0-10.200.8.48:22-10.200.12.6:34052.service. Oct 2 19:18:37.913231 update_engine[1287]: I1002 19:18:37.913183 1287 update_attempter.cc:505] Updating boot flags... Oct 2 19:18:38.601263 sshd[1536]: Accepted publickey for core from 10.200.12.6 port 34052 ssh2: RSA SHA256:gmG02UXHBKapD9vqiBZ3w7SUJvWJJQwqYxETXcCINW8 Oct 2 19:18:38.602966 sshd[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:18:38.609036 systemd[1]: Started session-3.scope. Oct 2 19:18:38.609637 systemd-logind[1286]: New session 3 of user core. Oct 2 19:18:39.141052 systemd[1]: Started sshd@1-10.200.8.48:22-10.200.12.6:34064.service. Oct 2 19:18:39.767333 sshd[1580]: Accepted publickey for core from 10.200.12.6 port 34064 ssh2: RSA SHA256:gmG02UXHBKapD9vqiBZ3w7SUJvWJJQwqYxETXcCINW8 Oct 2 19:18:39.769013 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:18:39.774566 systemd[1]: Started session-4.scope. Oct 2 19:18:39.775003 systemd-logind[1286]: New session 4 of user core. Oct 2 19:18:40.218380 sshd[1580]: pam_unix(sshd:session): session closed for user core Oct 2 19:18:40.221682 systemd[1]: sshd@1-10.200.8.48:22-10.200.12.6:34064.service: Deactivated successfully. Oct 2 19:18:40.222737 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:18:40.223510 systemd-logind[1286]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:18:40.224367 systemd-logind[1286]: Removed session 4. Oct 2 19:18:40.322036 systemd[1]: Started sshd@2-10.200.8.48:22-10.200.12.6:34072.service. Oct 2 19:18:40.944021 sshd[1586]: Accepted publickey for core from 10.200.12.6 port 34072 ssh2: RSA SHA256:gmG02UXHBKapD9vqiBZ3w7SUJvWJJQwqYxETXcCINW8 Oct 2 19:18:40.945673 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:18:40.950643 systemd[1]: Started session-5.scope. Oct 2 19:18:40.951086 systemd-logind[1286]: New session 5 of user core. Oct 2 19:18:41.874728 sshd[1586]: pam_unix(sshd:session): session closed for user core Oct 2 19:18:41.877567 systemd[1]: sshd@2-10.200.8.48:22-10.200.12.6:34072.service: Deactivated successfully. Oct 2 19:18:41.878610 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:18:41.879416 systemd-logind[1286]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:18:41.880361 systemd-logind[1286]: Removed session 5. Oct 2 19:18:41.976308 systemd[1]: Started sshd@3-10.200.8.48:22-10.200.12.6:34088.service. Oct 2 19:18:42.624234 sshd[1592]: Accepted publickey for core from 10.200.12.6 port 34088 ssh2: RSA SHA256:gmG02UXHBKapD9vqiBZ3w7SUJvWJJQwqYxETXcCINW8 Oct 2 19:18:42.625896 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:18:42.630904 systemd[1]: Started session-6.scope. 
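The SHA256:gmG02UXH... string sshd logs for each accepted key above is an OpenSSH-style fingerprint: the unpadded base64 of the SHA-256 digest of the raw public-key blob. A minimal sketch for recomputing it from an authorized_keys entry; the core user's actual key is not in this log, so the example input is hypothetical.

import base64, hashlib

def ssh_sha256_fingerprint(authorized_keys_line: str) -> str:
    # authorized_keys format: "<type> <base64 key blob> [comment]"
    blob = base64.b64decode(authorized_keys_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# print(ssh_sha256_fingerprint("ssh-rsa AAAAB3NzaC1yc2E... core@example"))  # hypothetical key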
Oct 2 19:18:42.631386 systemd-logind[1286]: New session 6 of user core. Oct 2 19:18:43.245166 sshd[1592]: pam_unix(sshd:session): session closed for user core Oct 2 19:18:43.248513 systemd[1]: sshd@3-10.200.8.48:22-10.200.12.6:34088.service: Deactivated successfully. Oct 2 19:18:43.249494 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:18:43.250267 systemd-logind[1286]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:18:43.251053 systemd-logind[1286]: Removed session 6. Oct 2 19:18:43.357619 systemd[1]: Started sshd@4-10.200.8.48:22-10.200.12.6:34104.service. Oct 2 19:18:44.010939 sshd[1601]: Accepted publickey for core from 10.200.12.6 port 34104 ssh2: RSA SHA256:gmG02UXHBKapD9vqiBZ3w7SUJvWJJQwqYxETXcCINW8 Oct 2 19:18:44.012605 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:18:44.018042 systemd-logind[1286]: New session 7 of user core. Oct 2 19:18:44.018319 systemd[1]: Started session-7.scope. Oct 2 19:18:44.600594 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:18:44.600942 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:18:44.621365 dbus-daemon[1272]: \xd0}d{2V: received setenforce notice (enforcing=-423889168) Oct 2 19:18:44.623617 sudo[1604]: pam_unix(sudo:session): session closed for user root Oct 2 19:18:44.740475 sshd[1601]: pam_unix(sshd:session): session closed for user core Oct 2 19:18:44.744071 systemd[1]: sshd@4-10.200.8.48:22-10.200.12.6:34104.service: Deactivated successfully. Oct 2 19:18:44.745213 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:18:44.746008 systemd-logind[1286]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:18:44.747011 systemd-logind[1286]: Removed session 7. Oct 2 19:18:44.847166 systemd[1]: Started sshd@5-10.200.8.48:22-10.200.12.6:34116.service. Oct 2 19:18:45.475790 sshd[1608]: Accepted publickey for core from 10.200.12.6 port 34116 ssh2: RSA SHA256:gmG02UXHBKapD9vqiBZ3w7SUJvWJJQwqYxETXcCINW8 Oct 2 19:18:45.477550 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:18:45.483353 systemd[1]: Started session-8.scope. Oct 2 19:18:45.484085 systemd-logind[1286]: New session 8 of user core. Oct 2 19:18:45.823512 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:18:45.824005 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:18:45.826804 sudo[1612]: pam_unix(sudo:session): session closed for user root Oct 2 19:18:45.831506 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:18:45.831769 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:18:45.840702 systemd[1]: Stopping audit-rules.service... Oct 2 19:18:45.841000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:18:45.845150 kernel: kauditd_printk_skb: 3 callbacks suppressed Oct 2 19:18:45.845221 kernel: audit: type=1305 audit(1696274325.841:175): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:18:45.845438 auditctl[1615]: No rules Oct 2 19:18:45.845879 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:18:45.846043 systemd[1]: Stopped audit-rules.service. 
Oct 2 19:18:45.847655 systemd[1]: Starting audit-rules.service... Oct 2 19:18:45.841000 audit[1615]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff266bd200 a2=420 a3=0 items=0 ppid=1 pid=1615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:45.841000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:18:45.870051 augenrules[1632]: No rules Oct 2 19:18:45.872808 kernel: audit: type=1300 audit(1696274325.841:175): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff266bd200 a2=420 a3=0 items=0 ppid=1 pid=1615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:45.872884 kernel: audit: type=1327 audit(1696274325.841:175): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:18:45.872910 kernel: audit: type=1131 audit(1696274325.844:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:45.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:45.871806 sudo[1611]: pam_unix(sudo:session): session closed for user root Oct 2 19:18:45.870864 systemd[1]: Finished audit-rules.service. Oct 2 19:18:45.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:45.893286 kernel: audit: type=1130 audit(1696274325.870:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:45.894142 kernel: audit: type=1106 audit(1696274325.870:178): pid=1611 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:18:45.870000 audit[1611]: USER_END pid=1611 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:18:45.870000 audit[1611]: CRED_DISP pid=1611 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:18:45.910133 kernel: audit: type=1104 audit(1696274325.870:179): pid=1611 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:18:46.077239 systemd[1]: Started sshd@6-10.200.8.48:22-10.200.12.6:34126.service. 
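The PROCTITLE field in the audit record above hex-encodes the audited process's argv with NUL separators, which is why the auditctl invocation is not directly readable. A small decoder in Python, applied to the value shown:

def decode_proctitle(hexstr: str) -> list[str]:
    # Audit PROCTITLE values are the process argv, hex-encoded with NUL separators.
    return [part.decode() for part in bytes.fromhex(hexstr).split(b"\x00")]

print(decode_proctitle("2F7362696E2F617564697463746C002D44"))
# ['/sbin/auditctl', '-D']  -- delete all rules, matching the CONFIG_CHANGE and "No rules" lines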
Oct 2 19:18:46.090204 kernel: audit: type=1130 audit(1696274326.076:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.48:22-10.200.12.6:34126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:46.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.48:22-10.200.12.6:34126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:46.376147 sshd[1608]: pam_unix(sshd:session): session closed for user core Oct 2 19:18:46.376000 audit[1608]: USER_END pid=1608 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:18:46.380077 systemd[1]: sshd@5-10.200.8.48:22-10.200.12.6:34116.service: Deactivated successfully. Oct 2 19:18:46.380967 systemd[1]: session-8.scope: Deactivated successfully. Oct 2 19:18:46.382167 systemd-logind[1286]: Session 8 logged out. Waiting for processes to exit. Oct 2 19:18:46.383066 systemd-logind[1286]: Removed session 8. Oct 2 19:18:46.376000 audit[1608]: CRED_DISP pid=1608 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:18:46.393129 kernel: audit: type=1106 audit(1696274326.376:181): pid=1608 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:18:46.393178 kernel: audit: type=1104 audit(1696274326.376:182): pid=1608 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:18:46.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.48:22-10.200.12.6:34116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:18:46.710000 audit[1637]: USER_ACCT pid=1637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:18:46.711581 sshd[1637]: Accepted publickey for core from 10.200.12.6 port 34126 ssh2: RSA SHA256:gmG02UXHBKapD9vqiBZ3w7SUJvWJJQwqYxETXcCINW8 Oct 2 19:18:46.711000 audit[1637]: CRED_ACQ pid=1637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:18:46.711000 audit[1637]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdea133410 a2=3 a3=0 items=0 ppid=1 pid=1637 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:46.711000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:18:46.713323 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:18:46.718883 systemd[1]: Started session-9.scope. Oct 2 19:18:46.719360 systemd-logind[1286]: New session 9 of user core. Oct 2 19:18:46.722000 audit[1637]: USER_START pid=1637 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:18:46.724000 audit[1640]: CRED_ACQ pid=1640 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:18:47.058000 audit[1641]: USER_ACCT pid=1641 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:18:47.058000 audit[1641]: CRED_REFR pid=1641 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:18:47.059296 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:18:47.059569 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:18:47.060000 audit[1641]: USER_START pid=1641 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:18:47.742224 systemd[1]: Reloading. 
Oct 2 19:18:47.820749 /usr/lib/systemd/system-generators/torcx-generator[1671]: time="2023-10-02T19:18:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:18:47.826210 /usr/lib/systemd/system-generators/torcx-generator[1671]: time="2023-10-02T19:18:47Z" level=info msg="torcx already run" Oct 2 19:18:47.915012 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:18:47.915034 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:18:47.931235 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:18:47.999000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:47.999000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:47.999000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:47.999000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:47.999000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:47.999000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:47.999000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:47.999000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:47.999000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:47.999000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:47.999000 audit: BPF prog-id=38 op=LOAD Oct 2 19:18:47.999000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:18:48.001000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.001000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.001000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.001000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.001000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.001000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.001000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.001000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.001000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.001000 audit: BPF prog-id=39 op=LOAD Oct 2 19:18:48.001000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.001000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.001000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.001000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.001000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.001000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.001000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.001000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.001000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.001000 audit: BPF prog-id=40 
op=LOAD Oct 2 19:18:48.001000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:18:48.001000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:18:48.003000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.003000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.003000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.003000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.003000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.003000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.003000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.003000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.003000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.004000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.004000 audit: BPF prog-id=41 op=LOAD Oct 2 19:18:48.004000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:18:48.005000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.005000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.005000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.005000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.005000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.005000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:18:48.005000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.005000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.005000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.005000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.005000 audit: BPF prog-id=42 op=LOAD Oct 2 19:18:48.005000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:18:48.007000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.007000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.007000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.007000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.007000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.007000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.007000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.007000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.007000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.008000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.008000 audit: BPF prog-id=43 op=LOAD Oct 2 19:18:48.008000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:18:48.008000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.008000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.008000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.008000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.008000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.008000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.008000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.008000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.008000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.008000 audit: BPF prog-id=44 op=LOAD Oct 2 19:18:48.008000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.008000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.008000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.008000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.008000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.008000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.008000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.008000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.008000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.008000 audit: BPF prog-id=45 op=LOAD Oct 2 19:18:48.008000 audit: BPF prog-id=33 op=UNLOAD Oct 2 19:18:48.008000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:18:48.009000 audit[1]: AVC avc: denied { bpf } for 
pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.009000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.009000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.009000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.009000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.009000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.009000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.009000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.009000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.009000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.009000 audit: BPF prog-id=46 op=LOAD Oct 2 19:18:48.009000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:18:48.009000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.009000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.009000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.009000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.009000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.009000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.009000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:18:48.009000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.009000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.009000 audit: BPF prog-id=47 op=LOAD Oct 2 19:18:48.009000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.009000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.009000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.010000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.010000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.010000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.010000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.010000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.010000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.010000 audit: BPF prog-id=48 op=LOAD Oct 2 19:18:48.010000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:18:48.010000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:18:48.010000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.010000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.010000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.010000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.010000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
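Note: the AVC block above repeats only two distinct denials — { perfmon } (capability 38) and { bpf } (capability 39) for pid=1 comm="systemd" — many times over. A small, hypothetical Python sketch that condenses such output into per-permission counts; the regex only assumes the field layout visible in the records above, and the script name is illustrative:

#!/usr/bin/env python3
"""Condense repeated SELinux AVC denial records into per-permission counts."""
import re
import sys
from collections import Counter

# Field layout follows the records above, e.g.
#   AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 ...
AVC_RE = re.compile(r'avc:\s+denied\s+\{\s*(?P<perm>\w+)\s*\}.*?comm="(?P<comm>[^"]+)"')

def summarize(lines):
    counts = Counter()
    for line in lines:
        for m in AVC_RE.finditer(line):        # wrapped lines can hold many records
            counts[(m.group("comm"), m.group("perm"))] += 1
    return counts

if __name__ == "__main__":
    for (comm, perm), n in summarize(sys.stdin).most_common():
        print(f"{n:6d}  comm={comm}  denied {{ {perm} }}")

Run as, for example, python3 avc_summary.py < boot.log, where boot.log is a capture of this console output; kernel-echoed copies of the same events are counted too.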
Oct 2 19:18:48.010000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.010000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.010000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.010000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.010000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.010000 audit: BPF prog-id=49 op=LOAD Oct 2 19:18:48.010000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:18:48.011000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.011000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.011000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.011000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.011000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.011000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.011000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.011000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.011000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.011000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.011000 audit: BPF prog-id=50 op=LOAD Oct 2 19:18:48.011000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:18:48.012000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.012000 
audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.012000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.012000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.012000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.012000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.012000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.012000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.012000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.012000 audit: BPF prog-id=51 op=LOAD Oct 2 19:18:48.012000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.012000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.012000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.012000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.012000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.012000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.012000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.012000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:48.012000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
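Note: the interleaved "audit: BPF prog-id=N op=LOAD / op=UNLOAD" records appear to be systemd swapping out its per-unit BPF programs as units are stopped and (re)started. A minimal sketch, assuming only the record format shown above, that replays those events and lists which program IDs are still live at the end of a capture:

#!/usr/bin/env python3
"""Replay 'audit: BPF prog-id=N op=LOAD/UNLOAD' records and report which
program IDs have been loaded but not (yet) unloaded."""
import re
import sys

BPF_RE = re.compile(r"audit: BPF prog-id=(?P<id>\d+) op=(?P<op>LOAD|UNLOAD)")

def live_prog_ids(lines):
    live = set()
    for line in lines:
        for m in BPF_RE.finditer(line):        # several records may share one line
            prog_id = int(m.group("id"))
            if m.group("op") == "LOAD":
                live.add(prog_id)
            else:
                live.discard(prog_id)
    return live

if __name__ == "__main__":
    ids = live_prog_ids(sys.stdin)
    print(f"{len(ids)} prog-ids still loaded:", sorted(ids))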
Oct 2 19:18:48.012000 audit: BPF prog-id=52 op=LOAD Oct 2 19:18:48.012000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:18:48.012000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:18:48.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:48.023149 systemd[1]: Started kubelet.service. Oct 2 19:18:48.041820 systemd[1]: Starting coreos-metadata.service... Oct 2 19:18:48.109141 coreos-metadata[1739]: Oct 02 19:18:48.107 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Oct 2 19:18:48.109987 kubelet[1732]: E1002 19:18:48.109940 1732 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Oct 2 19:18:48.110288 coreos-metadata[1739]: Oct 02 19:18:48.110 INFO Fetch successful Oct 2 19:18:48.110358 coreos-metadata[1739]: Oct 02 19:18:48.110 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Oct 2 19:18:48.111839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 19:18:48.112014 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 19:18:48.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 19:18:48.113135 coreos-metadata[1739]: Oct 02 19:18:48.113 INFO Fetch successful Oct 2 19:18:48.113343 coreos-metadata[1739]: Oct 02 19:18:48.113 INFO Fetching http://168.63.129.16/machine/d6c8f2cc-9c61-4b4c-a784-3b139ed47e24/76987da7%2Dad28%2D4702%2Da9af%2D1524ed08a746.%5Fci%2D3510.3.0%2Da%2Deb10099fa4?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Oct 2 19:18:48.114987 coreos-metadata[1739]: Oct 02 19:18:48.114 INFO Fetch successful Oct 2 19:18:48.148057 coreos-metadata[1739]: Oct 02 19:18:48.148 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Oct 2 19:18:48.161234 coreos-metadata[1739]: Oct 02 19:18:48.161 INFO Fetch successful Oct 2 19:18:48.170295 systemd[1]: Finished coreos-metadata.service. Oct 2 19:18:48.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:51.171743 systemd[1]: Stopped kubelet.service. Oct 2 19:18:51.182591 kernel: kauditd_printk_skb: 186 callbacks suppressed Oct 2 19:18:51.182693 kernel: audit: type=1130 audit(1696274331.170:367): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:51.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:51.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:18:51.197513 kernel: audit: type=1131 audit(1696274331.170:368): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:51.203242 systemd[1]: Reloading. Oct 2 19:18:51.289363 /usr/lib/systemd/system-generators/torcx-generator[1798]: time="2023-10-02T19:18:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:18:51.289400 /usr/lib/systemd/system-generators/torcx-generator[1798]: time="2023-10-02T19:18:51Z" level=info msg="torcx already run" Oct 2 19:18:51.377596 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:18:51.377616 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:18:51.393912 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:18:51.463000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.478221 kernel: audit: type=1400 audit(1696274331.463:369): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.463000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.504102 kernel: audit: type=1400 audit(1696274331.463:370): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.504247 kernel: audit: type=1400 audit(1696274331.463:371): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.463000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.463000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.529834 kernel: audit: type=1400 audit(1696274331.463:372): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.529974 kernel: audit: type=1400 audit(1696274331.463:373): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.463000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.543135 kernel: audit: type=1400 audit(1696274331.463:374): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.463000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.562048 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:18:51.562229 kernel: audit: type=1400 audit(1696274331.463:375): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.463000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.463000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.463000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.476000 audit: BPF prog-id=53 op=LOAD Oct 2 19:18:51.476000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:18:51.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.490000 audit[1]: AVC avc: denied { bpf 
} for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.490000 audit: BPF prog-id=54 op=LOAD Oct 2 19:18:51.490000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.490000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.490000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.490000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.490000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.490000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.490000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.490000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.490000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.490000 audit: BPF prog-id=55 op=LOAD Oct 2 19:18:51.490000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:18:51.490000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:18:51.492000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.492000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.492000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.492000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.492000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.492000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.492000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.492000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.492000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.502000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.502000 audit: BPF prog-id=56 op=LOAD Oct 2 19:18:51.503000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:18:51.516000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.516000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.516000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.516000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.516000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.516000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.516000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.516000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.516000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.528000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.528000 audit: BPF prog-id=57 op=LOAD Oct 2 19:18:51.528000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:18:51.531000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.531000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.531000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.531000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.531000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.531000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.531000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.531000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.531000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.542000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.542000 audit: BPF prog-id=58 op=LOAD Oct 2 19:18:51.542000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:18:51.542000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.542000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.560000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.562000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.562000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.562000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.562000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.562000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.562000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.562000 
audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.562000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.562000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.562000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.562000 audit: BPF prog-id=61 op=LOAD Oct 2 19:18:51.562000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit: BPF prog-id=62 op=LOAD Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit: BPF prog-id=63 op=LOAD Oct 2 19:18:51.563000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:18:51.563000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.563000 audit: BPF prog-id=64 op=LOAD Oct 2 19:18:51.563000 audit: BPF prog-id=49 op=UNLOAD Oct 2 19:18:51.564000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.564000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.564000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.564000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.564000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.564000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.564000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.564000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.564000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.565000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.565000 audit: BPF prog-id=65 op=LOAD Oct 2 19:18:51.565000 audit: BPF prog-id=50 op=UNLOAD Oct 2 19:18:51.565000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.565000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.565000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.565000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.565000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.565000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.565000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:18:51.565000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.565000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.565000 audit: BPF prog-id=66 op=LOAD Oct 2 19:18:51.565000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.565000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.565000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.565000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.565000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.565000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.565000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.565000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.565000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.565000 audit: BPF prog-id=67 op=LOAD Oct 2 19:18:51.565000 audit: BPF prog-id=51 op=UNLOAD Oct 2 19:18:51.583838 systemd[1]: Started kubelet.service. Oct 2 19:18:51.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:18:51.635560 kubelet[1860]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:18:51.635896 kubelet[1860]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:18:51.635934 kubelet[1860]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
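Note: the deprecation messages above point at the file passed to the kubelet via --config; the earlier kubelet.service failure was precisely a missing /var/lib/kubelet/config.yaml, and by this restart the file evidently exists. Purely for illustration, a minimal KubeletConfiguration consistent with values visible elsewhere in this log (systemd cgroup driver, static pod path /etc/kubernetes/manifests) could be written as below — the real file is provisioned by the cluster's bootstrap tooling and carries many more fields, so treat the exact contents as an assumption:

#!/usr/bin/env python3
"""Write an illustrative, minimal KubeletConfiguration to the path the kubelet
complained about earlier (/var/lib/kubelet/config.yaml). NOT the node's real
config: only the two fields mirrored from this log are included."""
import pathlib

CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)   # needs root on a real node
path.write_text(CONFIG)
print(f"wrote {path} ({len(CONFIG)} bytes)")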
Oct 2 19:18:51.636076 kubelet[1860]: I1002 19:18:51.636050 1860 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:18:51.637563 kubelet[1860]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:18:51.637651 kubelet[1860]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:18:51.637687 kubelet[1860]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:18:51.960345 kubelet[1860]: I1002 19:18:51.960310 1860 server.go:413] "Kubelet version" kubeletVersion="v1.25.10" Oct 2 19:18:51.960345 kubelet[1860]: I1002 19:18:51.960337 1860 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:18:51.960621 kubelet[1860]: I1002 19:18:51.960601 1860 server.go:825] "Client rotation is on, will bootstrap in background" Oct 2 19:18:51.967145 kubelet[1860]: I1002 19:18:51.967097 1860 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:18:51.969471 kubelet[1860]: I1002 19:18:51.969440 1860 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 2 19:18:51.969708 kubelet[1860]: I1002 19:18:51.969694 1860 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:18:51.969795 kubelet[1860]: I1002 19:18:51.969780 1860 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Oct 2 19:18:51.969934 kubelet[1860]: I1002 19:18:51.969808 1860 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:18:51.969934 kubelet[1860]: I1002 19:18:51.969824 1860 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true Oct 2 19:18:51.970020 kubelet[1860]: I1002 19:18:51.969946 1860 state_mem.go:36] 
"Initialized new in-memory state store" Oct 2 19:18:51.973406 kubelet[1860]: I1002 19:18:51.973387 1860 kubelet.go:381] "Attempting to sync node with API server" Oct 2 19:18:51.973406 kubelet[1860]: I1002 19:18:51.973408 1860 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:18:51.973551 kubelet[1860]: I1002 19:18:51.973446 1860 kubelet.go:281] "Adding apiserver pod source" Oct 2 19:18:51.973551 kubelet[1860]: I1002 19:18:51.973461 1860 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:18:51.974069 kubelet[1860]: E1002 19:18:51.974049 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:51.974372 kubelet[1860]: E1002 19:18:51.974149 1860 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:51.974473 kubelet[1860]: I1002 19:18:51.974444 1860 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:18:51.974805 kubelet[1860]: W1002 19:18:51.974788 1860 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 2 19:18:51.975319 kubelet[1860]: I1002 19:18:51.975300 1860 server.go:1175] "Started kubelet" Oct 2 19:18:51.976240 kubelet[1860]: I1002 19:18:51.976226 1860 server.go:155] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:18:51.976964 kubelet[1860]: I1002 19:18:51.976940 1860 server.go:438] "Adding debug handlers to kubelet server" Oct 2 19:18:51.977000 audit[1860]: AVC avc: denied { mac_admin } for pid=1860 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.977000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:18:51.977000 audit[1860]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b82750 a1=c0006bbe90 a2=c000b82720 a3=25 items=0 ppid=1 pid=1860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:51.977000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:18:51.977000 audit[1860]: AVC avc: denied { mac_admin } for pid=1860 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.977000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:18:51.977000 audit[1860]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b94160 a1=c0006bbea8 a2=c000b827e0 a3=25 items=0 ppid=1 pid=1860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:51.977000 audit: PROCTITLE 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:18:51.978818 kubelet[1860]: I1002 19:18:51.978382 1860 kubelet.go:1274] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:18:51.978818 kubelet[1860]: I1002 19:18:51.978427 1860 kubelet.go:1278] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:18:51.978818 kubelet[1860]: I1002 19:18:51.978492 1860 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:18:51.980000 audit[1871]: NETFILTER_CFG table=mangle:6 family=2 entries=2 op=nft_register_chain pid=1871 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:51.980000 audit[1871]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffddef72e0 a2=0 a3=7fffddef72cc items=0 ppid=1860 pid=1871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:51.980000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:18:51.982991 kubelet[1860]: I1002 19:18:51.982966 1860 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 19:18:51.982000 audit[1872]: NETFILTER_CFG table=filter:7 family=2 entries=2 op=nft_register_chain pid=1872 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:51.982000 audit[1872]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffe2bfa09d0 a2=0 a3=7ffe2bfa09bc items=0 ppid=1860 pid=1872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:51.982000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:18:51.984371 kubelet[1860]: I1002 19:18:51.984345 1860 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 2 19:18:51.985000 audit[1874]: NETFILTER_CFG table=filter:8 family=2 entries=2 op=nft_register_chain pid=1874 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:51.985000 audit[1874]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcf3b616b0 a2=0 a3=7ffcf3b6169c items=0 ppid=1860 pid=1874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:51.985000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:18:51.986000 audit[1876]: NETFILTER_CFG table=filter:9 family=2 entries=2 op=nft_register_chain pid=1876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:51.986000 audit[1876]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffffeb28ed0 a2=0 a3=7ffffeb28ebc 
items=0 ppid=1860 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:51.986000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:18:51.988915 kubelet[1860]: E1002 19:18:51.988894 1860 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:18:51.989031 kubelet[1860]: E1002 19:18:51.989019 1860 kubelet.go:1317] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:18:51.989451 kubelet[1860]: E1002 19:18:51.989436 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:52.001193 kubelet[1860]: W1002 19:18:52.001152 1860 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:18:52.007595 kubelet[1860]: E1002 19:18:52.007571 1860 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:18:52.007732 kubelet[1860]: E1002 19:18:52.006955 1860 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.200.8.48" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:18:52.007797 kubelet[1860]: E1002 19:18:52.006998 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a608486ffc72f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 51, 975272239, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 51, 975272239, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:18:52.008166 kubelet[1860]: W1002 19:18:52.007178 1860 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:18:52.008300 kubelet[1860]: E1002 19:18:52.008289 1860 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:18:52.008373 kubelet[1860]: W1002 19:18:52.007221 1860 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.200.8.48" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:18:52.008472 kubelet[1860]: E1002 19:18:52.008431 1860 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.48" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:18:52.011218 kubelet[1860]: E1002 19:18:52.011135 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a608487d15673", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 51, 989005939, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 51, 989005939, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:18:52.016969 kubelet[1860]: I1002 19:18:52.016953 1860 cpu_manager.go:213] "Starting CPU manager" policy="none" Oct 2 19:18:52.017090 kubelet[1860]: I1002 19:18:52.017081 1860 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s" Oct 2 19:18:52.017183 kubelet[1860]: I1002 19:18:52.017173 1860 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:18:52.017365 kubelet[1860]: E1002 19:18:52.017256 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a6084896e77af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.48 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16080815, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16080815, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:18:52.017896 kubelet[1860]: E1002 19:18:52.017836 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a6084896e91db", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.48 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16087515, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16087515, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:18:52.018426 kubelet[1860]: E1002 19:18:52.018371 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a6084896ea4ff", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.48 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16092415, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16092415, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:18:52.023504 kubelet[1860]: I1002 19:18:52.023490 1860 policy_none.go:49] "None policy: Start" Oct 2 19:18:52.024075 kubelet[1860]: I1002 19:18:52.024052 1860 memory_manager.go:168] "Starting memorymanager" policy="None" Oct 2 19:18:52.024205 kubelet[1860]: I1002 19:18:52.024194 1860 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:18:52.023000 audit[1884]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_rule pid=1884 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:52.023000 audit[1884]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe2b4d0470 a2=0 a3=7ffe2b4d045c items=0 ppid=1860 pid=1884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.023000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:18:52.024000 audit[1885]: NETFILTER_CFG table=nat:11 family=2 entries=2 op=nft_register_chain pid=1885 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:52.024000 audit[1885]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffefeb4cd90 a2=0 a3=7ffefeb4cd7c items=0 ppid=1860 pid=1885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.024000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:18:52.032501 systemd[1]: Created slice kubepods.slice. Oct 2 19:18:52.036476 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:18:52.039461 systemd[1]: Created slice kubepods-besteffort.slice. 
Oct 2 19:18:52.044694 kubelet[1860]: I1002 19:18:52.044677 1860 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:18:52.043000 audit[1860]: AVC avc: denied { mac_admin } for pid=1860 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:52.043000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:18:52.043000 audit[1860]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ec5fb0 a1=c000be99b0 a2=c000ec5f80 a3=25 items=0 ppid=1 pid=1860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.043000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:18:52.045181 kubelet[1860]: I1002 19:18:52.045161 1860 server.go:86] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:18:52.045466 kubelet[1860]: I1002 19:18:52.045376 1860 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:18:52.046699 kubelet[1860]: E1002 19:18:52.046680 1860 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.48\" not found" Oct 2 19:18:52.048588 kubelet[1860]: E1002 19:18:52.048443 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a60848b46d0af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 47036591, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 47036591, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:18:52.058000 audit[1890]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1890 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:52.058000 audit[1890]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff7690dd20 a2=0 a3=7fff7690dd0c items=0 ppid=1860 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.058000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:18:52.075000 audit[1893]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1893 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:52.075000 audit[1893]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffc34154340 a2=0 a3=7ffc3415432c items=0 ppid=1860 pid=1893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.075000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:18:52.076000 audit[1894]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_chain pid=1894 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:52.076000 audit[1894]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd01d54fe0 a2=0 a3=7ffd01d54fcc items=0 ppid=1860 pid=1894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.076000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:18:52.078000 audit[1895]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_chain pid=1895 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:52.078000 audit[1895]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc863527e0 a2=0 a3=7ffc863527cc items=0 ppid=1860 pid=1895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.078000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:18:52.080000 audit[1897]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1897 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:52.080000 audit[1897]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe3d4e24b0 a2=0 a3=7ffe3d4e249c items=0 ppid=1860 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.080000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:18:52.083458 kubelet[1860]: E1002 19:18:52.083432 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:52.084184 kubelet[1860]: I1002 19:18:52.084170 1860 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.48" Oct 2 19:18:52.085454 kubelet[1860]: E1002 19:18:52.085438 1860 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.48" Oct 2 19:18:52.085673 kubelet[1860]: E1002 19:18:52.085621 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a6084896e77af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.48 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16080815, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 84100340, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.178a6084896e77af" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:18:52.086506 kubelet[1860]: E1002 19:18:52.086457 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a6084896e91db", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.48 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16087515, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 84105340, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.178a6084896e91db" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:18:52.087294 kubelet[1860]: E1002 19:18:52.087245 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a6084896ea4ff", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.48 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16092415, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 84110240, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.178a6084896ea4ff" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:18:52.082000 audit[1899]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1899 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:52.082000 audit[1899]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffce1645f30 a2=0 a3=7ffce1645f1c items=0 ppid=1860 pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.082000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:18:52.155000 audit[1902]: NETFILTER_CFG table=nat:18 family=2 entries=1 op=nft_register_rule pid=1902 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:52.155000 audit[1902]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffe6b4449c0 a2=0 a3=7ffe6b4449ac items=0 ppid=1860 pid=1902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.155000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:18:52.157000 audit[1904]: NETFILTER_CFG table=nat:19 family=2 entries=1 op=nft_register_rule pid=1904 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:52.157000 audit[1904]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffffa6ea290 a2=0 a3=7ffffa6ea27c items=0 ppid=1860 pid=1904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.157000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:18:52.184204 kubelet[1860]: E1002 19:18:52.184170 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:52.183000 audit[1907]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_rule pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:52.183000 audit[1907]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7fff17c0b4a0 a2=0 a3=7fff17c0b48c items=0 ppid=1860 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.183000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:18:52.185047 kubelet[1860]: I1002 19:18:52.185025 1860 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Oct 2 19:18:52.185000 audit[1908]: NETFILTER_CFG table=mangle:21 family=10 entries=2 op=nft_register_chain pid=1908 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:52.185000 audit[1908]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd7561f970 a2=0 a3=7ffd7561f95c items=0 ppid=1860 pid=1908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.185000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:18:52.185000 audit[1909]: NETFILTER_CFG table=mangle:22 family=2 entries=1 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:52.185000 audit[1909]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff543a34a0 a2=0 a3=7fff543a348c items=0 ppid=1860 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.185000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:18:52.186000 audit[1910]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1910 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:52.186000 audit[1910]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffac7f2d70 a2=0 a3=7fffac7f2d5c items=0 ppid=1860 pid=1910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.186000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:18:52.187000 audit[1911]: NETFILTER_CFG table=nat:24 family=2 entries=1 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:52.187000 audit[1911]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe49e7f5b0 a2=0 a3=7ffe49e7f59c items=0 ppid=1860 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.187000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:18:52.188000 audit[1912]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_chain pid=1912 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:52.188000 audit[1912]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca7ded760 a2=0 a3=7ffca7ded74c items=0 ppid=1860 pid=1912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.188000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:18:52.189000 audit[1914]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_rule pid=1914 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Oct 2 19:18:52.189000 audit[1914]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffcdc3fca80 a2=0 a3=7ffcdc3fca6c items=0 ppid=1860 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.189000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:18:52.190000 audit[1915]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=1915 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:52.190000 audit[1915]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffdf6491fb0 a2=0 a3=7ffdf6491f9c items=0 ppid=1860 pid=1915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.190000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:18:52.193000 audit[1917]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=1917 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:52.193000 audit[1917]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffc520fa4b0 a2=0 a3=7ffc520fa49c items=0 ppid=1860 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.193000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:18:52.194000 audit[1918]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_chain pid=1918 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:52.194000 audit[1918]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeb02032a0 a2=0 a3=7ffeb020328c items=0 ppid=1860 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.194000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:18:52.195000 audit[1919]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:52.195000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf0a4b7d0 a2=0 a3=7ffdf0a4b7bc items=0 ppid=1860 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.195000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:18:52.197000 audit[1921]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1921 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:52.197000 audit[1921]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd5a96d390 a2=0 a3=7ffd5a96d37c items=0 ppid=1860 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.197000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:18:52.199000 audit[1923]: NETFILTER_CFG table=nat:32 family=10 entries=2 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:52.199000 audit[1923]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffcfe470860 a2=0 a3=7ffcfe47084c items=0 ppid=1860 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.199000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:18:52.201000 audit[1925]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_rule pid=1925 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:52.201000 audit[1925]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffd92748840 a2=0 a3=7ffd9274882c items=0 ppid=1860 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.201000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:18:52.203000 audit[1927]: NETFILTER_CFG table=nat:34 family=10 entries=1 op=nft_register_rule pid=1927 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:52.203000 audit[1927]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffd141e0960 a2=0 a3=7ffd141e094c items=0 ppid=1860 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.203000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:18:52.206000 audit[1929]: NETFILTER_CFG table=nat:35 family=10 entries=1 op=nft_register_rule pid=1929 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:52.206000 audit[1929]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7fffbe5aeab0 a2=0 a3=7fffbe5aea9c items=0 ppid=1860 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.206000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:18:52.208655 kubelet[1860]: I1002 19:18:52.208633 1860 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Oct 2 19:18:52.208756 kubelet[1860]: I1002 19:18:52.208665 1860 status_manager.go:161] "Starting to sync pod status with apiserver" Oct 2 19:18:52.208756 kubelet[1860]: I1002 19:18:52.208687 1860 kubelet.go:2010] "Starting kubelet main sync loop" Oct 2 19:18:52.208756 kubelet[1860]: E1002 19:18:52.208735 1860 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:18:52.209534 kubelet[1860]: E1002 19:18:52.209511 1860 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.200.8.48" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:18:52.209000 audit[1930]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=1930 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:52.209000 audit[1930]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd1d453540 a2=0 a3=7ffd1d45352c items=0 ppid=1860 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.209000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:18:52.211000 audit[1931]: NETFILTER_CFG table=nat:37 family=10 entries=1 op=nft_register_chain pid=1931 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:52.211000 audit[1931]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeee145b30 a2=0 a3=7ffeee145b1c items=0 ppid=1860 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.211000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:18:52.213000 audit[1932]: NETFILTER_CFG table=filter:38 family=10 entries=1 op=nft_register_chain pid=1932 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:52.213000 audit[1932]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffac1cfa90 a2=0 a3=7fffac1cfa7c items=0 ppid=1860 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:52.213000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:18:52.215947 kubelet[1860]: W1002 19:18:52.211495 1860 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:18:52.215947 kubelet[1860]: E1002 
19:18:52.211521 1860 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:18:52.285054 kubelet[1860]: E1002 19:18:52.284991 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:52.286911 kubelet[1860]: I1002 19:18:52.286876 1860 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.48" Oct 2 19:18:52.288088 kubelet[1860]: E1002 19:18:52.288061 1860 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.48" Oct 2 19:18:52.288223 kubelet[1860]: E1002 19:18:52.288020 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a6084896e77af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.48 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16080815, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 286830478, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.178a6084896e77af" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:18:52.288997 kubelet[1860]: E1002 19:18:52.288924 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a6084896e91db", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.48 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16087515, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 286843578, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.178a6084896e91db" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:18:52.377773 kubelet[1860]: E1002 19:18:52.377665 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a6084896ea4ff", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.48 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16092415, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 286848678, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.178a6084896ea4ff" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:18:52.385909 kubelet[1860]: E1002 19:18:52.385869 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:52.486726 kubelet[1860]: E1002 19:18:52.486597 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:52.587167 kubelet[1860]: E1002 19:18:52.587091 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:52.612302 kubelet[1860]: E1002 19:18:52.612254 1860 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.200.8.48" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:18:52.687746 kubelet[1860]: E1002 19:18:52.687686 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:52.689561 kubelet[1860]: I1002 19:18:52.689536 1860 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.48" Oct 2 19:18:52.690872 kubelet[1860]: E1002 19:18:52.690849 1860 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.48" Oct 2 19:18:52.691015 kubelet[1860]: E1002 19:18:52.690842 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a6084896e77af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.48 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16080815, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 689484473, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.178a6084896e77af" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:18:52.778130 kubelet[1860]: E1002 19:18:52.777835 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a6084896e91db", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.48 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16087515, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 689496373, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.178a6084896e91db" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:18:52.788184 kubelet[1860]: E1002 19:18:52.788148 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:52.888807 kubelet[1860]: E1002 19:18:52.888761 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:52.975478 kubelet[1860]: E1002 19:18:52.975426 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:52.976863 kubelet[1860]: W1002 19:18:52.976831 1860 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:18:52.976983 kubelet[1860]: E1002 19:18:52.976869 1860 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:18:52.977439 kubelet[1860]: E1002 19:18:52.977353 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a6084896ea4ff", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.48 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16092415, time.Local), 
LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 689500673, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.178a6084896ea4ff" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:18:52.989723 kubelet[1860]: E1002 19:18:52.989681 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:53.090849 kubelet[1860]: E1002 19:18:53.090714 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:53.191228 kubelet[1860]: E1002 19:18:53.191183 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:53.292182 kubelet[1860]: E1002 19:18:53.292100 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:53.391916 kubelet[1860]: W1002 19:18:53.391791 1860 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:18:53.391916 kubelet[1860]: E1002 19:18:53.391832 1860 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:18:53.392862 kubelet[1860]: E1002 19:18:53.392834 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:53.414237 kubelet[1860]: E1002 19:18:53.414186 1860 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.200.8.48" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:18:53.463610 kubelet[1860]: W1002 19:18:53.463573 1860 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.200.8.48" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:18:53.463610 kubelet[1860]: E1002 19:18:53.463609 1860 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.48" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:18:53.491891 kubelet[1860]: I1002 19:18:53.491849 1860 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.48" Oct 2 19:18:53.493070 kubelet[1860]: E1002 19:18:53.493015 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:53.493226 kubelet[1860]: E1002 19:18:53.493210 1860 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.48" Oct 2 19:18:53.493349 kubelet[1860]: E1002 19:18:53.493256 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a6084896e77af", GenerateName:"", Namespace:"default", 
SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.48 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16080815, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 53, 491796399, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.178a6084896e77af" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:18:53.494317 kubelet[1860]: E1002 19:18:53.494245 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a6084896e91db", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.48 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16087515, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 53, 491809100, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.178a6084896e91db" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:18:53.503083 kubelet[1860]: W1002 19:18:53.503060 1860 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:18:53.503201 kubelet[1860]: E1002 19:18:53.503088 1860 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:18:53.577498 kubelet[1860]: E1002 19:18:53.577396 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a6084896ea4ff", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.48 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16092415, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 53, 491813500, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.178a6084896ea4ff" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:18:53.593842 kubelet[1860]: E1002 19:18:53.593784 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:53.694312 kubelet[1860]: E1002 19:18:53.694266 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:53.794866 kubelet[1860]: E1002 19:18:53.794813 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:53.895472 kubelet[1860]: E1002 19:18:53.895419 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:53.976328 kubelet[1860]: E1002 19:18:53.976190 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:53.995580 kubelet[1860]: E1002 19:18:53.995530 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:54.096321 kubelet[1860]: E1002 19:18:54.096265 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:54.196765 kubelet[1860]: E1002 19:18:54.196709 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:54.297830 kubelet[1860]: E1002 19:18:54.297696 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:54.398219 kubelet[1860]: E1002 19:18:54.398174 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:54.499162 kubelet[1860]: E1002 19:18:54.499091 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:54.599841 kubelet[1860]: E1002 19:18:54.599705 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:54.700310 kubelet[1860]: E1002 19:18:54.700257 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:54.800822 kubelet[1860]: E1002 19:18:54.800767 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:54.901501 kubelet[1860]: E1002 19:18:54.901365 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:54.977207 kubelet[1860]: E1002 19:18:54.977145 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:55.001538 kubelet[1860]: E1002 19:18:55.001492 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:55.015805 kubelet[1860]: E1002 19:18:55.015758 1860 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.200.8.48" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:18:55.094287 kubelet[1860]: I1002 19:18:55.094211 1860 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.48" Oct 2 19:18:55.095444 kubelet[1860]: E1002 19:18:55.095408 1860 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.48" Oct 2 19:18:55.095603 kubelet[1860]: E1002 19:18:55.095401 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a6084896e77af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, 
time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.48 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16080815, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 55, 94154495, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.178a6084896e77af" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:18:55.096357 kubelet[1860]: E1002 19:18:55.096278 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a6084896e91db", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.48 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16087515, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 55, 94168195, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.178a6084896e91db" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
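The lease the controller keeps retrying above is the node heartbeat: a coordination.k8s.io/v1 Lease named after the node, in the kube-node-lease namespace, renewed by the kubelet once it is authorized. A hedged Go sketch (same assumed kubeconfig path as above) that reads that object back:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The node-lease heartbeat object the controller in the log is retrying:
	// Lease "10.200.8.48" in namespace kube-node-lease.
	lease, err := client.CoordinationV1().Leases("kube-node-lease").Get(
		context.TODO(), "10.200.8.48", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if lease.Spec.HolderIdentity != nil {
		fmt.Printf("holder=%s ", *lease.Spec.HolderIdentity)
	}
	fmt.Printf("last renew=%v\n", lease.Spec.RenewTime)
}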
Oct 2 19:18:55.097160 kubelet[1860]: E1002 19:18:55.097065 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a6084896ea4ff", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.48 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16092415, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 55, 94173695, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.178a6084896ea4ff" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:18:55.102237 kubelet[1860]: E1002 19:18:55.102216 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:55.202696 kubelet[1860]: E1002 19:18:55.202638 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:55.303414 kubelet[1860]: E1002 19:18:55.303353 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:55.403996 kubelet[1860]: E1002 19:18:55.403940 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:55.504910 kubelet[1860]: E1002 19:18:55.504775 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:55.605308 kubelet[1860]: E1002 19:18:55.605251 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:55.639719 kubelet[1860]: W1002 19:18:55.639681 1860 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:18:55.639719 kubelet[1860]: E1002 19:18:55.639718 1860 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:18:55.655939 kubelet[1860]: W1002 19:18:55.655900 1860 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:18:55.655939 kubelet[1860]: E1002 19:18:55.655940 1860 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the 
cluster scope Oct 2 19:18:55.705398 kubelet[1860]: E1002 19:18:55.705347 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:55.806002 kubelet[1860]: E1002 19:18:55.805864 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:55.906441 kubelet[1860]: E1002 19:18:55.906390 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:55.978227 kubelet[1860]: E1002 19:18:55.978169 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:56.006560 kubelet[1860]: E1002 19:18:56.006498 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:56.043586 kubelet[1860]: W1002 19:18:56.043543 1860 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.200.8.48" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:18:56.043586 kubelet[1860]: E1002 19:18:56.043585 1860 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.48" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:18:56.107286 kubelet[1860]: E1002 19:18:56.107154 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:56.142722 kubelet[1860]: W1002 19:18:56.142681 1860 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:18:56.142722 kubelet[1860]: E1002 19:18:56.142722 1860 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:18:56.207673 kubelet[1860]: E1002 19:18:56.207623 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:56.308561 kubelet[1860]: E1002 19:18:56.308503 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:56.409209 kubelet[1860]: E1002 19:18:56.409057 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:56.509922 kubelet[1860]: E1002 19:18:56.509874 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:56.610449 kubelet[1860]: E1002 19:18:56.610393 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:56.711260 kubelet[1860]: E1002 19:18:56.711202 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:56.811820 kubelet[1860]: E1002 19:18:56.811761 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:56.912378 kubelet[1860]: E1002 19:18:56.912322 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:56.979184 kubelet[1860]: E1002 19:18:56.979035 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:57.012688 kubelet[1860]: E1002 19:18:57.012641 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not 
found" Oct 2 19:18:57.046583 kubelet[1860]: E1002 19:18:57.046545 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:57.113253 kubelet[1860]: E1002 19:18:57.113204 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:57.214105 kubelet[1860]: E1002 19:18:57.214051 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:57.314916 kubelet[1860]: E1002 19:18:57.314775 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:57.415271 kubelet[1860]: E1002 19:18:57.415223 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:57.516170 kubelet[1860]: E1002 19:18:57.516091 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:57.616759 kubelet[1860]: E1002 19:18:57.616620 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:57.717205 kubelet[1860]: E1002 19:18:57.717154 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:57.817792 kubelet[1860]: E1002 19:18:57.817735 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:57.918472 kubelet[1860]: E1002 19:18:57.918339 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:57.979997 kubelet[1860]: E1002 19:18:57.979941 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:58.018465 kubelet[1860]: E1002 19:18:58.018419 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:58.119369 kubelet[1860]: E1002 19:18:58.119313 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:58.216787 kubelet[1860]: E1002 19:18:58.216693 1860 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.200.8.48" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:18:58.219866 kubelet[1860]: E1002 19:18:58.219838 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:58.297411 kubelet[1860]: I1002 19:18:58.297139 1860 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.48" Oct 2 19:18:58.298484 kubelet[1860]: E1002 19:18:58.298453 1860 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.48" Oct 2 19:18:58.298710 kubelet[1860]: E1002 19:18:58.298466 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a6084896e77af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", 
UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.48 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16080815, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 58, 297051330, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.178a6084896e77af" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:18:58.299696 kubelet[1860]: E1002 19:18:58.299626 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a6084896e91db", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.48 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16087515, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 58, 297068731, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.178a6084896e91db" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:18:58.300680 kubelet[1860]: E1002 19:18:58.300616 1860 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.48.178a6084896ea4ff", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.48", UID:"10.200.8.48", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.48 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.48"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 18, 52, 16092415, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 18, 58, 297073331, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.48.178a6084896ea4ff" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:18:58.320918 kubelet[1860]: E1002 19:18:58.320866 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:58.421497 kubelet[1860]: E1002 19:18:58.421444 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:58.522425 kubelet[1860]: E1002 19:18:58.522295 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:58.622835 kubelet[1860]: E1002 19:18:58.622777 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:58.723439 kubelet[1860]: E1002 19:18:58.723380 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:58.824096 kubelet[1860]: E1002 19:18:58.823956 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:58.924531 kubelet[1860]: E1002 19:18:58.924470 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:58.980137 kubelet[1860]: E1002 19:18:58.980068 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:59.024657 kubelet[1860]: E1002 19:18:59.024607 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:59.125554 kubelet[1860]: E1002 19:18:59.125417 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:59.226475 kubelet[1860]: E1002 19:18:59.226430 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:59.327047 kubelet[1860]: E1002 19:18:59.326989 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:59.427798 kubelet[1860]: E1002 19:18:59.427660 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:59.528553 kubelet[1860]: E1002 19:18:59.528503 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:59.613177 kubelet[1860]: W1002 19:18:59.613138 1860 
reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:18:59.613177 kubelet[1860]: E1002 19:18:59.613174 1860 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:18:59.629344 kubelet[1860]: E1002 19:18:59.629283 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:59.729786 kubelet[1860]: E1002 19:18:59.729729 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:59.830386 kubelet[1860]: E1002 19:18:59.830330 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:59.931011 kubelet[1860]: E1002 19:18:59.930956 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:18:59.980685 kubelet[1860]: E1002 19:18:59.980542 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:00.031900 kubelet[1860]: E1002 19:19:00.031843 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:00.132809 kubelet[1860]: E1002 19:19:00.132755 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:00.233783 kubelet[1860]: E1002 19:19:00.233655 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:00.334203 kubelet[1860]: E1002 19:19:00.334151 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:00.434777 kubelet[1860]: E1002 19:19:00.434720 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:00.535794 kubelet[1860]: E1002 19:19:00.535660 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:00.636138 kubelet[1860]: E1002 19:19:00.636084 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:00.736687 kubelet[1860]: E1002 19:19:00.736633 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:00.837330 kubelet[1860]: E1002 19:19:00.837194 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:00.937764 kubelet[1860]: E1002 19:19:00.937697 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:00.981426 kubelet[1860]: E1002 19:19:00.981369 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:01.038263 kubelet[1860]: E1002 19:19:01.038205 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:01.139163 kubelet[1860]: E1002 19:19:01.139016 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:01.239218 kubelet[1860]: E1002 19:19:01.239156 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:01.334820 kubelet[1860]: W1002 19:19:01.334777 1860 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in 
API group "" at the cluster scope Oct 2 19:19:01.334820 kubelet[1860]: E1002 19:19:01.334819 1860 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:19:01.339871 kubelet[1860]: E1002 19:19:01.339837 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:01.440375 kubelet[1860]: E1002 19:19:01.440333 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:01.541334 kubelet[1860]: E1002 19:19:01.541283 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:01.556594 kubelet[1860]: W1002 19:19:01.556558 1860 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:19:01.556594 kubelet[1860]: E1002 19:19:01.556598 1860 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:19:01.642436 kubelet[1860]: E1002 19:19:01.642386 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:01.743182 kubelet[1860]: E1002 19:19:01.743034 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:01.843725 kubelet[1860]: E1002 19:19:01.843670 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:01.944402 kubelet[1860]: E1002 19:19:01.944348 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:01.966882 kubelet[1860]: I1002 19:19:01.966823 1860 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:19:01.982361 kubelet[1860]: E1002 19:19:01.982304 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:02.045482 kubelet[1860]: E1002 19:19:02.045348 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:02.047616 kubelet[1860]: E1002 19:19:02.047496 1860 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.48\" not found" Oct 2 19:19:02.048213 kubelet[1860]: E1002 19:19:02.048178 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:02.145917 kubelet[1860]: E1002 19:19:02.145859 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:02.246234 kubelet[1860]: E1002 19:19:02.246171 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:02.346991 kubelet[1860]: E1002 19:19:02.346622 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:02.365389 kubelet[1860]: E1002 19:19:02.365340 1860 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.8.48" 
not found Oct 2 19:19:02.447546 kubelet[1860]: E1002 19:19:02.447496 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:02.548518 kubelet[1860]: E1002 19:19:02.548477 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:02.649153 kubelet[1860]: E1002 19:19:02.649010 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:02.749687 kubelet[1860]: E1002 19:19:02.749631 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:02.850555 kubelet[1860]: E1002 19:19:02.850496 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:02.951476 kubelet[1860]: E1002 19:19:02.951423 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:02.981983 kubelet[1860]: I1002 19:19:02.981922 1860 apiserver.go:52] "Watching apiserver" Oct 2 19:19:02.983043 kubelet[1860]: E1002 19:19:02.983016 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:03.052484 kubelet[1860]: E1002 19:19:03.052429 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:03.153186 kubelet[1860]: E1002 19:19:03.153130 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:03.254236 kubelet[1860]: E1002 19:19:03.254104 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:03.353730 kubelet[1860]: I1002 19:19:03.353671 1860 reconciler.go:169] "Reconciler: start to sync state" Oct 2 19:19:03.354802 kubelet[1860]: E1002 19:19:03.354775 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:03.455290 kubelet[1860]: E1002 19:19:03.455238 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:03.556451 kubelet[1860]: E1002 19:19:03.556325 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:03.657207 kubelet[1860]: E1002 19:19:03.657153 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:03.757680 kubelet[1860]: E1002 19:19:03.757624 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:03.810782 kubelet[1860]: E1002 19:19:03.810661 1860 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.8.48" not found Oct 2 19:19:03.857994 kubelet[1860]: E1002 19:19:03.857944 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:03.958102 kubelet[1860]: E1002 19:19:03.958048 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:03.983505 kubelet[1860]: E1002 19:19:03.983444 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:04.058763 kubelet[1860]: E1002 19:19:04.058711 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:04.159712 kubelet[1860]: E1002 19:19:04.159586 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:04.260160 kubelet[1860]: E1002 19:19:04.260092 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:04.361021 kubelet[1860]: E1002 19:19:04.360965 1860 kubelet.go:2448] "Error getting 
node" err="node \"10.200.8.48\" not found" Oct 2 19:19:04.461378 kubelet[1860]: E1002 19:19:04.461327 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:04.562411 kubelet[1860]: E1002 19:19:04.562361 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:04.621561 kubelet[1860]: E1002 19:19:04.621523 1860 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.8.48\" not found" node="10.200.8.48" Oct 2 19:19:04.662796 kubelet[1860]: E1002 19:19:04.662740 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:04.700492 kubelet[1860]: I1002 19:19:04.700452 1860 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.48" Oct 2 19:19:04.763787 kubelet[1860]: E1002 19:19:04.763648 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:04.864329 kubelet[1860]: E1002 19:19:04.864275 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:04.965196 kubelet[1860]: E1002 19:19:04.965142 1860 kubelet.go:2448] "Error getting node" err="node \"10.200.8.48\" not found" Oct 2 19:19:04.984649 kubelet[1860]: E1002 19:19:04.984593 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:05.011180 kubelet[1860]: I1002 19:19:05.011107 1860 kubelet_node_status.go:73] "Successfully registered node" node="10.200.8.48" Oct 2 19:19:05.037941 kubelet[1860]: I1002 19:19:05.037365 1860 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:19:05.042825 systemd[1]: Created slice kubepods-besteffort-pod7c242d3a_32bd_471b_9380_31ec106cc3ba.slice. Oct 2 19:19:05.053982 kubelet[1860]: I1002 19:19:05.053954 1860 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:19:05.058225 systemd[1]: Created slice kubepods-burstable-pod4993a1bc_d12d_4d80_8674_1449084f234b.slice. Oct 2 19:19:05.065727 kubelet[1860]: I1002 19:19:05.065703 1860 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:19:05.066308 env[1300]: time="2023-10-02T19:19:05.066269613Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 2 19:19:05.066657 kubelet[1860]: I1002 19:19:05.066504 1860 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:19:05.067004 kubelet[1860]: E1002 19:19:05.066985 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:05.100870 sudo[1641]: pam_unix(sudo:session): session closed for user root Oct 2 19:19:05.099000 audit[1641]: USER_END pid=1641 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:19:05.106094 kernel: kauditd_printk_skb: 316 callbacks suppressed Oct 2 19:19:05.106188 kernel: audit: type=1106 audit(1696274345.099:558): pid=1641 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:19:05.104000 audit[1641]: CRED_DISP pid=1641 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:19:05.136367 kernel: audit: type=1104 audit(1696274345.104:559): pid=1641 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:19:05.164862 kubelet[1860]: I1002 19:19:05.164817 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c242d3a-32bd-471b-9380-31ec106cc3ba-xtables-lock\") pod \"kube-proxy-75psk\" (UID: \"7c242d3a-32bd-471b-9380-31ec106cc3ba\") " pod="kube-system/kube-proxy-75psk" Oct 2 19:19:05.165051 kubelet[1860]: I1002 19:19:05.164883 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-etc-cni-netd\") pod \"cilium-x2g4b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " pod="kube-system/cilium-x2g4b" Oct 2 19:19:05.165051 kubelet[1860]: I1002 19:19:05.164917 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4993a1bc-d12d-4d80-8674-1449084f234b-clustermesh-secrets\") pod \"cilium-x2g4b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " pod="kube-system/cilium-x2g4b" Oct 2 19:19:05.165051 kubelet[1860]: I1002 19:19:05.164948 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqfcr\" (UniqueName: \"kubernetes.io/projected/4993a1bc-d12d-4d80-8674-1449084f234b-kube-api-access-mqfcr\") pod \"cilium-x2g4b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " pod="kube-system/cilium-x2g4b" Oct 2 19:19:05.165051 kubelet[1860]: I1002 19:19:05.164973 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-cilium-run\") pod \"cilium-x2g4b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " pod="kube-system/cilium-x2g4b" Oct 2 19:19:05.165051 kubelet[1860]: I1002 19:19:05.164999 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-bpf-maps\") pod \"cilium-x2g4b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " pod="kube-system/cilium-x2g4b" Oct 2 19:19:05.165051 kubelet[1860]: I1002 19:19:05.165026 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-xtables-lock\") pod \"cilium-x2g4b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " pod="kube-system/cilium-x2g4b" Oct 2 19:19:05.165364 kubelet[1860]: I1002 19:19:05.165051 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-host-proc-sys-net\") pod \"cilium-x2g4b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " pod="kube-system/cilium-x2g4b" Oct 2 19:19:05.165364 kubelet[1860]: I1002 19:19:05.165104 1860 reconciler.go:357] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4993a1bc-d12d-4d80-8674-1449084f234b-hubble-tls\") pod \"cilium-x2g4b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " pod="kube-system/cilium-x2g4b" Oct 2 19:19:05.165364 kubelet[1860]: I1002 19:19:05.165164 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c242d3a-32bd-471b-9380-31ec106cc3ba-lib-modules\") pod \"kube-proxy-75psk\" (UID: \"7c242d3a-32bd-471b-9380-31ec106cc3ba\") " pod="kube-system/kube-proxy-75psk" Oct 2 19:19:05.165364 kubelet[1860]: I1002 19:19:05.165196 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-cilium-cgroup\") pod \"cilium-x2g4b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " pod="kube-system/cilium-x2g4b" Oct 2 19:19:05.165364 kubelet[1860]: I1002 19:19:05.165227 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4993a1bc-d12d-4d80-8674-1449084f234b-cilium-config-path\") pod \"cilium-x2g4b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " pod="kube-system/cilium-x2g4b" Oct 2 19:19:05.165364 kubelet[1860]: I1002 19:19:05.165253 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-cni-path\") pod \"cilium-x2g4b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " pod="kube-system/cilium-x2g4b" Oct 2 19:19:05.165605 kubelet[1860]: I1002 19:19:05.165282 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-lib-modules\") pod \"cilium-x2g4b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " pod="kube-system/cilium-x2g4b" Oct 2 19:19:05.165605 kubelet[1860]: I1002 19:19:05.165313 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-host-proc-sys-kernel\") pod \"cilium-x2g4b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " pod="kube-system/cilium-x2g4b" Oct 2 19:19:05.165605 kubelet[1860]: I1002 19:19:05.165354 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7c242d3a-32bd-471b-9380-31ec106cc3ba-kube-proxy\") pod \"kube-proxy-75psk\" (UID: \"7c242d3a-32bd-471b-9380-31ec106cc3ba\") " pod="kube-system/kube-proxy-75psk" Oct 2 19:19:05.165605 kubelet[1860]: I1002 19:19:05.165386 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8tj6\" (UniqueName: \"kubernetes.io/projected/7c242d3a-32bd-471b-9380-31ec106cc3ba-kube-api-access-d8tj6\") pod \"kube-proxy-75psk\" (UID: \"7c242d3a-32bd-471b-9380-31ec106cc3ba\") " pod="kube-system/kube-proxy-75psk" Oct 2 19:19:05.165605 kubelet[1860]: I1002 19:19:05.165419 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-hostproc\") pod \"cilium-x2g4b\" (UID: 
\"4993a1bc-d12d-4d80-8674-1449084f234b\") " pod="kube-system/cilium-x2g4b" Oct 2 19:19:05.206905 sshd[1637]: pam_unix(sshd:session): session closed for user core Oct 2 19:19:05.207000 audit[1637]: USER_END pid=1637 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:19:05.210429 systemd[1]: sshd@6-10.200.8.48:22-10.200.12.6:34126.service: Deactivated successfully. Oct 2 19:19:05.211220 systemd[1]: session-9.scope: Deactivated successfully. Oct 2 19:19:05.212497 systemd-logind[1286]: Session 9 logged out. Waiting for processes to exit. Oct 2 19:19:05.213599 systemd-logind[1286]: Removed session 9. Oct 2 19:19:05.207000 audit[1637]: CRED_DISP pid=1637 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:19:05.246131 kernel: audit: type=1106 audit(1696274345.207:560): pid=1637 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:19:05.246256 kernel: audit: type=1104 audit(1696274345.207:561): pid=1637 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:19:05.246277 kernel: audit: type=1131 audit(1696274345.209:562): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.48:22-10.200.12.6:34126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:19:05.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.48:22-10.200.12.6:34126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:19:05.985479 kubelet[1860]: E1002 19:19:05.985416 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:06.208485 kubelet[1860]: I1002 19:19:06.208429 1860 request.go:690] Waited for 1.153921169s due to client-side throttling, not priority and fairness, request: GET:https://10.200.8.4:6443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dhubble-server-certs&limit=500&resourceVersion=0 Oct 2 19:19:06.267155 kubelet[1860]: E1002 19:19:06.266985 1860 configmap.go:197] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Oct 2 19:19:06.267410 kubelet[1860]: E1002 19:19:06.267370 1860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4993a1bc-d12d-4d80-8674-1449084f234b-cilium-config-path podName:4993a1bc-d12d-4d80-8674-1449084f234b nodeName:}" failed. No retries permitted until 2023-10-02 19:19:06.767097482 +0000 UTC m=+15.178956513 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/4993a1bc-d12d-4d80-8674-1449084f234b-cilium-config-path") pod "cilium-x2g4b" (UID: "4993a1bc-d12d-4d80-8674-1449084f234b") : failed to sync configmap cache: timed out waiting for the condition Oct 2 19:19:06.852703 env[1300]: time="2023-10-02T19:19:06.852653887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-75psk,Uid:7c242d3a-32bd-471b-9380-31ec106cc3ba,Namespace:kube-system,Attempt:0,}" Oct 2 19:19:06.985830 kubelet[1860]: E1002 19:19:06.985780 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:07.048828 kubelet[1860]: E1002 19:19:07.048789 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:07.165063 env[1300]: time="2023-10-02T19:19:07.164726529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x2g4b,Uid:4993a1bc-d12d-4d80-8674-1449084f234b,Namespace:kube-system,Attempt:0,}" Oct 2 19:19:07.754338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2824428615.mount: Deactivated successfully. Oct 2 19:19:07.786517 env[1300]: time="2023-10-02T19:19:07.786461220Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:19:07.790440 env[1300]: time="2023-10-02T19:19:07.790396794Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:19:07.801522 env[1300]: time="2023-10-02T19:19:07.801474403Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:19:07.804289 env[1300]: time="2023-10-02T19:19:07.804251055Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:19:07.808000 env[1300]: time="2023-10-02T19:19:07.807960625Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:19:07.811257 env[1300]: time="2023-10-02T19:19:07.811221086Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:19:07.815636 env[1300]: time="2023-10-02T19:19:07.815596368Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:19:07.818337 env[1300]: time="2023-10-02T19:19:07.818302519Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:19:07.890489 env[1300]: time="2023-10-02T19:19:07.890298773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:19:07.890489 env[1300]: time="2023-10-02T19:19:07.890350474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:19:07.890489 env[1300]: time="2023-10-02T19:19:07.890365474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:19:07.891037 env[1300]: time="2023-10-02T19:19:07.890294073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:19:07.891037 env[1300]: time="2023-10-02T19:19:07.890335274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:19:07.891037 env[1300]: time="2023-10-02T19:19:07.890350274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:19:07.891256 env[1300]: time="2023-10-02T19:19:07.891200090Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1bddf7549b2e880e45a2ea3e40c32fca994fc9fe4eaaf24bfa0e274271f1504 pid=1951 runtime=io.containerd.runc.v2 Oct 2 19:19:07.891362 env[1300]: time="2023-10-02T19:19:07.891245191Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399 pid=1953 runtime=io.containerd.runc.v2 Oct 2 19:19:07.916288 systemd[1]: Started cri-containerd-d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399.scope. Oct 2 19:19:07.927415 systemd[1]: Started cri-containerd-a1bddf7549b2e880e45a2ea3e40c32fca994fc9fe4eaaf24bfa0e274271f1504.scope. 
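The two runc shims started above back the sandboxes for kube-proxy-75psk and cilium-x2g4b, the pods admitted for this node a few lines earlier. A last hedged Go sketch (same assumed kubeconfig path) that lists everything the scheduler has bound to the node named in the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Pods bound to this node, e.g. kube-system/kube-proxy-75psk and
	// kube-system/cilium-x2g4b from the RunPodSandbox lines above.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=10.200.8.48"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}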
Oct 2 19:19:07.940000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.955145 kernel: audit: type=1400 audit(1696274347.940:563): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.940000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.975689 kernel: audit: type=1400 audit(1696274347.940:564): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.975806 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:19:07.975828 kernel: audit: type=1400 audit(1696274347.940:565): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.975854 kernel: audit: audit_lost=21 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:19:07.940000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.993978 kubelet[1860]: E1002 19:19:07.993941 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:07.940000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.940000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.940000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.940000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.940000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.940000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.954000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.954000 audit: BPF prog-id=68 op=LOAD Oct 2 19:19:07.954000 audit[1974]: AVC avc: denied { bpf } for pid=1974 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.954000 audit[1974]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=1953 
pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:07.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439626436323161323266363461643438336562666436366366306164 Oct 2 19:19:07.954000 audit[1974]: AVC avc: denied { perfmon } for pid=1974 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.954000 audit[1974]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=1953 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:07.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439626436323161323266363461643438336562666436366366306164 Oct 2 19:19:07.954000 audit[1974]: AVC avc: denied { bpf } for pid=1974 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.954000 audit[1974]: AVC avc: denied { bpf } for pid=1974 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.954000 audit[1974]: AVC avc: denied { bpf } for pid=1974 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.954000 audit[1974]: AVC avc: denied { perfmon } for pid=1974 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.954000 audit[1974]: AVC avc: denied { perfmon } for pid=1974 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.954000 audit[1974]: AVC avc: denied { perfmon } for pid=1974 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.954000 audit[1974]: AVC avc: denied { perfmon } for pid=1974 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.954000 audit[1974]: AVC avc: denied { perfmon } for pid=1974 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.954000 audit[1974]: AVC avc: denied { bpf } for pid=1974 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.954000 audit[1974]: AVC avc: denied { bpf } for pid=1974 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.954000 audit: BPF prog-id=69 op=LOAD Oct 2 19:19:07.954000 
audit[1974]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c000025640 items=0 ppid=1953 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:07.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439626436323161323266363461643438336562666436366366306164 Oct 2 19:19:07.955000 audit[1974]: AVC avc: denied { bpf } for pid=1974 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.955000 audit[1974]: AVC avc: denied { bpf } for pid=1974 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.955000 audit[1974]: AVC avc: denied { perfmon } for pid=1974 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.955000 audit[1974]: AVC avc: denied { perfmon } for pid=1974 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.955000 audit[1974]: AVC avc: denied { perfmon } for pid=1974 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.955000 audit[1974]: AVC avc: denied { perfmon } for pid=1974 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.955000 audit[1974]: AVC avc: denied { perfmon } for pid=1974 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.955000 audit[1974]: AVC avc: denied { bpf } for pid=1974 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.955000 audit[1974]: AVC avc: denied { bpf } for pid=1974 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.955000 audit: BPF prog-id=70 op=LOAD Oct 2 19:19:07.955000 audit[1974]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c000025688 items=0 ppid=1953 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:07.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439626436323161323266363461643438336562666436366366306164 Oct 2 19:19:07.955000 audit: BPF prog-id=70 op=UNLOAD Oct 2 19:19:07.955000 audit: BPF prog-id=69 op=UNLOAD Oct 2 19:19:07.955000 audit[1974]: AVC avc: denied { bpf } for pid=1974 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
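The burst of audit records here comes from runc (exe="/run/torcx/unpack/docker/bin/runc"; ppid 1951/1953 are the two shims started above) loading the BPF programs it uses during container setup, typically the device-cgroup filter. In these records arch=c000003e is x86_64 and syscall 321 is bpf(2), and capabilities 38 and 39 in the capability2 class are CAP_PERFMON and CAP_BPF, matching the { perfmon } and { bpf } permission names in the AVC lines. Although the AVCs are denials (permissive=0), the paired SYSCALL records still report success=yes and the "BPF prog-id=... op=LOAD" lines follow, because the bpf() capability check also accepts CAP_SYS_ADMIN; the denials are noise rather than the failure investigated below. The PROCTITLE fields are the command line, hex-encoded with NUL separators; a minimal decoder, where the sample is the untruncated prefix of the record at 19:19:07.954:

    # Decode an audit PROCTITLE field: hex-encoded argv with NUL separators.
    def decode_proctitle(hexstr: str) -> list[str]:
        return [arg.decode() for arg in bytes.fromhex(hexstr).split(b"\x00")]

    sample = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E63"
              "2F6B38732E696F002D2D6C6F67")
    print(decode_proctitle(sample))
    # ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']

The full proctitle values in this capture are cut off mid-record, so only the leading arguments decode cleanly.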
Oct 2 19:19:07.955000 audit[1974]: AVC avc: denied { bpf } for pid=1974 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.955000 audit[1974]: AVC avc: denied { bpf } for pid=1974 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.955000 audit[1974]: AVC avc: denied { perfmon } for pid=1974 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.955000 audit[1974]: AVC avc: denied { perfmon } for pid=1974 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.955000 audit[1974]: AVC avc: denied { perfmon } for pid=1974 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.955000 audit[1974]: AVC avc: denied { perfmon } for pid=1974 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.955000 audit[1974]: AVC avc: denied { perfmon } for pid=1974 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.955000 audit[1974]: AVC avc: denied { bpf } for pid=1974 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.955000 audit[1974]: AVC avc: denied { bpf } for pid=1974 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.955000 audit: BPF prog-id=71 op=LOAD Oct 2 19:19:07.955000 audit[1974]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c000025a98 items=0 ppid=1953 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:07.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439626436323161323266363461643438336562666436366366306164 Oct 2 19:19:07.958000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.958000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.958000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.958000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.958000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.974000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.992000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.993000 audit[1973]: AVC avc: denied { bpf } for pid=1973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.993000 audit: BPF prog-id=73 op=LOAD Oct 2 19:19:07.993000 audit[1973]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bd9d8 a2=78 a3=c000288610 items=0 ppid=1951 pid=1973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:07.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131626464663735343962326538383065343561326561336534306333 Oct 2 19:19:07.995000 audit[1973]: AVC avc: denied { bpf } for pid=1973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.995000 audit[1973]: AVC avc: denied { bpf } for pid=1973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.995000 audit[1973]: AVC avc: denied { perfmon } for pid=1973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.995000 audit[1973]: AVC avc: denied { perfmon } for pid=1973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.995000 audit[1973]: AVC avc: denied { perfmon } for pid=1973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.995000 audit[1973]: AVC avc: denied { perfmon } for pid=1973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.995000 audit[1973]: AVC avc: denied { perfmon } for pid=1973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.995000 audit[1973]: AVC avc: denied { bpf } for pid=1973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.995000 audit[1973]: AVC avc: denied { bpf } for pid=1973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.995000 audit: BPF prog-id=74 op=LOAD Oct 2 19:19:07.995000 audit[1973]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001bd770 a2=78 a3=c000288658 items=0 ppid=1951 pid=1973 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:07.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131626464663735343962326538383065343561326561336534306333 Oct 2 19:19:07.996000 audit: BPF prog-id=74 op=UNLOAD Oct 2 19:19:07.996000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:19:07.996000 audit[1973]: AVC avc: denied { bpf } for pid=1973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.996000 audit[1973]: AVC avc: denied { bpf } for pid=1973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.996000 audit[1973]: AVC avc: denied { bpf } for pid=1973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.996000 audit[1973]: AVC avc: denied { perfmon } for pid=1973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.996000 audit[1973]: AVC avc: denied { perfmon } for pid=1973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.996000 audit[1973]: AVC avc: denied { perfmon } for pid=1973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.996000 audit[1973]: AVC avc: denied { perfmon } for pid=1973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.996000 audit[1973]: AVC avc: denied { perfmon } for pid=1973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.996000 audit[1973]: AVC avc: denied { bpf } for pid=1973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.996000 audit[1973]: AVC avc: denied { bpf } for pid=1973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:07.996000 audit: BPF prog-id=75 op=LOAD Oct 2 19:19:07.996000 audit[1973]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bdc30 a2=78 a3=c000288a68 items=0 ppid=1951 pid=1973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:07.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131626464663735343962326538383065343561326561336534306333 Oct 2 19:19:08.016599 env[1300]: time="2023-10-02T19:19:08.013527584Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-75psk,Uid:7c242d3a-32bd-471b-9380-31ec106cc3ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1bddf7549b2e880e45a2ea3e40c32fca994fc9fe4eaaf24bfa0e274271f1504\"" Oct 2 19:19:08.016599 env[1300]: time="2023-10-02T19:19:08.015340017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x2g4b,Uid:4993a1bc-d12d-4d80-8674-1449084f234b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399\"" Oct 2 19:19:08.018246 env[1300]: time="2023-10-02T19:19:08.018207470Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\"" Oct 2 19:19:08.994945 kubelet[1860]: E1002 19:19:08.994888 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:09.995745 kubelet[1860]: E1002 19:19:09.995682 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:10.996890 kubelet[1860]: E1002 19:19:10.996835 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:11.974432 kubelet[1860]: E1002 19:19:11.974388 1860 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:11.997691 kubelet[1860]: E1002 19:19:11.997656 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:12.050918 kubelet[1860]: E1002 19:19:12.050866 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:12.998343 kubelet[1860]: E1002 19:19:12.998286 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:13.998813 kubelet[1860]: E1002 19:19:13.998764 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:14.999581 kubelet[1860]: E1002 19:19:14.999545 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:15.963463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount75611260.mount: Deactivated successfully. 
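The two RunPodSandbox calls issued at 19:19:06-07 return their sandbox IDs here (a1bddf75... for kube-proxy-75psk, d9bd621a... for cilium-x2g4b), and the kubelet immediately asks containerd to pull the Cilium image by tag plus digest. When a reference carries both, the sha256 digest is what pins the content; the v1.12.1 tag is informational. A minimal sketch that splits the exact reference logged above into its parts (naive parsing, assumes the registry host carries no port):

    # Split an image reference of the form repo:tag@sha256:... into its parts.
    def split_ref(ref: str) -> dict:
        base, _, digest = ref.partition("@")
        repo, _, tag = base.partition(":")  # naive: breaks if the registry host has a port
        return {"repository": repo, "tag": tag or None, "digest": digest or None}

    ref = ("quay.io/cilium/cilium:v1.12.1@sha256:"
           "ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b")
    print(split_ref(ref))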
Oct 2 19:19:16.000241 kubelet[1860]: E1002 19:19:16.000191 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:17.000880 kubelet[1860]: E1002 19:19:17.000799 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:17.051844 kubelet[1860]: E1002 19:19:17.051772 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:18.001550 kubelet[1860]: E1002 19:19:18.001474 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:18.659366 env[1300]: time="2023-10-02T19:19:18.659310151Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:19:18.664268 env[1300]: time="2023-10-02T19:19:18.664224420Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:526bd4754c9cd45a9602873f814648239ebf8405ea2b401f5e7a3546f7310d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:19:18.668709 env[1300]: time="2023-10-02T19:19:18.668662683Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:19:18.670315 env[1300]: time="2023-10-02T19:19:18.670274106Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\" returns image reference \"sha256:526bd4754c9cd45a9602873f814648239ebf8405ea2b401f5e7a3546f7310d88\"" Oct 2 19:19:18.674034 env[1300]: time="2023-10-02T19:19:18.673914558Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\"" Oct 2 19:19:18.675647 env[1300]: time="2023-10-02T19:19:18.675607482Z" level=info msg="CreateContainer within sandbox \"d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:19:18.703167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount480576789.mount: Deactivated successfully. Oct 2 19:19:18.720475 env[1300]: time="2023-10-02T19:19:18.720420319Z" level=info msg="CreateContainer within sandbox \"d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9\"" Oct 2 19:19:18.721189 env[1300]: time="2023-10-02T19:19:18.721157730Z" level=info msg="StartContainer for \"f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9\"" Oct 2 19:19:18.743955 systemd[1]: Started cri-containerd-f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9.scope. Oct 2 19:19:18.755075 systemd[1]: cri-containerd-f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9.scope: Deactivated successfully. 
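The Cilium image pull completes after roughly ten seconds and the first mount-cgroup attempt starts here, but note the ordering: the cri-containerd-f3785393....scope unit is reported started at 19:19:18.743 and "Deactivated successfully" at 19:19:18.755, so the task dies essentially at creation time, and the actual error only surfaces at 19:19:22 below once the shim gives up. A quick way to see such a lifetime at a glance is to pull every journal line that mentions one container ID, in order (minimal sketch; "node.log" is again a hypothetical file name for this capture):

    import re, sys

    # Print every line of a saved journal dump that mentions one container ID,
    # prefixed with its timestamp, to reconstruct that container's lifetime.
    CID = "f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9"
    TS = re.compile(r"^(\w+ +\d+ \d\d:\d\d:\d\d\.\d+)")
    with open(sys.argv[1] if len(sys.argv) > 1 else "node.log") as fh:
        for line in fh:
            if CID in line:
                stamp = TS.match(line)
                print(stamp.group(1) if stamp else "??", line.strip()[:120])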
Oct 2 19:19:19.413939 kubelet[1860]: E1002 19:19:19.002638 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:19.697536 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9-rootfs.mount: Deactivated successfully. Oct 2 19:19:20.003600 kubelet[1860]: E1002 19:19:20.003532 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:20.748804 env[1300]: time="2023-10-02T19:19:20.748649501Z" level=error msg="get state for f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9" error="context deadline exceeded: unknown" Oct 2 19:19:20.748804 env[1300]: time="2023-10-02T19:19:20.748783603Z" level=warning msg="unknown status" status=0 Oct 2 19:19:21.004843 kubelet[1860]: E1002 19:19:21.004709 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:22.005803 kubelet[1860]: E1002 19:19:22.005748 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:22.053076 kubelet[1860]: E1002 19:19:22.053040 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:22.606736 env[1300]: time="2023-10-02T19:19:22.606657650Z" level=info msg="shim disconnected" id=f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9 Oct 2 19:19:22.606736 env[1300]: time="2023-10-02T19:19:22.606732651Z" level=warning msg="cleaning up after shim disconnected" id=f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9 namespace=k8s.io Oct 2 19:19:22.607275 env[1300]: time="2023-10-02T19:19:22.606747551Z" level=info msg="cleaning up dead shim" Oct 2 19:19:22.614529 env[1300]: time="2023-10-02T19:19:22.614483351Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:19:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2048 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:19:22Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:19:22.614848 env[1300]: time="2023-10-02T19:19:22.614746255Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed" Oct 2 19:19:22.616230 env[1300]: time="2023-10-02T19:19:22.616175073Z" level=error msg="Failed to pipe stderr of container \"f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9\"" error="reading from a closed fifo" Oct 2 19:19:22.616429 env[1300]: time="2023-10-02T19:19:22.616385976Z" level=error msg="Failed to pipe stdout of container \"f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9\"" error="reading from a closed fifo" Oct 2 19:19:22.620786 env[1300]: time="2023-10-02T19:19:22.620741232Z" level=error msg="StartContainer for \"f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:19:22.621032 
kubelet[1860]: E1002 19:19:22.621007 1860 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9" Oct 2 19:19:22.621483 kubelet[1860]: E1002 19:19:22.621237 1860 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:19:22.621483 kubelet[1860]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:19:22.621483 kubelet[1860]: rm /hostbin/cilium-mount Oct 2 19:19:22.621483 kubelet[1860]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mqfcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:19:22.621699 kubelet[1860]: E1002 19:19:22.621304 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-x2g4b" podUID=4993a1bc-d12d-4d80-8674-1449084f234b Oct 2 19:19:23.006936 kubelet[1860]: E1002 19:19:23.006884 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:23.265376 env[1300]: time="2023-10-02T19:19:23.265256670Z" level=info msg="CreateContainer within sandbox \"d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" 
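This is the actual failure for the cilium-x2g4b pod: runc cannot start the mount-cgroup init container because writing the container's SELinux label to /proc/self/attr/keycreate returns "invalid argument", and that error is propagated verbatim through containerd ("failed to create shim task") up to the kubelet's RunContainerError. The init container's SecurityContext requests SELinuxOptions Type:spc_t, Level:s0; runc writes the resulting label to the keycreate attribute so that kernel keyrings created for the container are labelled accordingly, and the kernel rejects such a write with EINVAL when the requested context is not valid under the policy in force, which matches the message seen here. A minimal sketch of that single interaction (the exact label string is an assumption derived from the logged SELinuxOptions):

    # Reproduce the kernel interaction that fails above: runc writes the
    # container's SELinux label to /proc/self/attr/keycreate before starting
    # the container process. The label below is an assumption based on the
    # logged SELinuxOptions (Type:spc_t, Level:s0); on this node the write
    # is rejected, which containerd reports as "error during container init".
    label = "system_u:system_r:spc_t:s0"
    try:
        with open("/proc/self/attr/keycreate", "w") as attr:
            attr.write(label)
        print("keycreate label accepted")
    except OSError as exc:
        print(f"keycreate write failed: {exc}")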
Oct 2 19:19:23.293968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2745287027.mount: Deactivated successfully. Oct 2 19:19:23.301046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2647140076.mount: Deactivated successfully. Oct 2 19:19:23.363539 env[1300]: time="2023-10-02T19:19:23.363479407Z" level=info msg="CreateContainer within sandbox \"d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"cbc2151e320db16007a80cec22787f8f4cbff6cae17ddd3c42c1af157ab19260\"" Oct 2 19:19:23.364869 env[1300]: time="2023-10-02T19:19:23.364831524Z" level=info msg="StartContainer for \"cbc2151e320db16007a80cec22787f8f4cbff6cae17ddd3c42c1af157ab19260\"" Oct 2 19:19:23.401870 systemd[1]: Started cri-containerd-cbc2151e320db16007a80cec22787f8f4cbff6cae17ddd3c42c1af157ab19260.scope. Oct 2 19:19:23.418773 systemd[1]: cri-containerd-cbc2151e320db16007a80cec22787f8f4cbff6cae17ddd3c42c1af157ab19260.scope: Deactivated successfully. Oct 2 19:19:23.607856 env[1300]: time="2023-10-02T19:19:23.607709485Z" level=info msg="shim disconnected" id=cbc2151e320db16007a80cec22787f8f4cbff6cae17ddd3c42c1af157ab19260 Oct 2 19:19:23.607856 env[1300]: time="2023-10-02T19:19:23.607774986Z" level=warning msg="cleaning up after shim disconnected" id=cbc2151e320db16007a80cec22787f8f4cbff6cae17ddd3c42c1af157ab19260 namespace=k8s.io Oct 2 19:19:23.607856 env[1300]: time="2023-10-02T19:19:23.607787686Z" level=info msg="cleaning up dead shim" Oct 2 19:19:23.616577 env[1300]: time="2023-10-02T19:19:23.616522597Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:19:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2085 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:19:23Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/cbc2151e320db16007a80cec22787f8f4cbff6cae17ddd3c42c1af157ab19260/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:19:23.616878 env[1300]: time="2023-10-02T19:19:23.616814800Z" level=error msg="copy shim log" error="read /proc/self/fd/37: file already closed" Oct 2 19:19:23.617092 env[1300]: time="2023-10-02T19:19:23.617059603Z" level=error msg="Failed to pipe stderr of container \"cbc2151e320db16007a80cec22787f8f4cbff6cae17ddd3c42c1af157ab19260\"" error="reading from a closed fifo" Oct 2 19:19:23.617217 env[1300]: time="2023-10-02T19:19:23.617174805Z" level=error msg="Failed to pipe stdout of container \"cbc2151e320db16007a80cec22787f8f4cbff6cae17ddd3c42c1af157ab19260\"" error="reading from a closed fifo" Oct 2 19:19:23.621930 env[1300]: time="2023-10-02T19:19:23.621882064Z" level=error msg="StartContainer for \"cbc2151e320db16007a80cec22787f8f4cbff6cae17ddd3c42c1af157ab19260\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:19:23.622192 kubelet[1860]: E1002 19:19:23.622151 1860 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" 
containerID="cbc2151e320db16007a80cec22787f8f4cbff6cae17ddd3c42c1af157ab19260" Oct 2 19:19:23.622315 kubelet[1860]: E1002 19:19:23.622292 1860 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:19:23.622315 kubelet[1860]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:19:23.622315 kubelet[1860]: rm /hostbin/cilium-mount Oct 2 19:19:23.622315 kubelet[1860]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mqfcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:19:23.622552 kubelet[1860]: E1002 19:19:23.622342 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-x2g4b" podUID=4993a1bc-d12d-4d80-8674-1449084f234b Oct 2 19:19:24.007145 kubelet[1860]: E1002 19:19:24.007053 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:24.046078 env[1300]: time="2023-10-02T19:19:24.046013296Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:19:24.052614 env[1300]: time="2023-10-02T19:19:24.052543476Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:19:24.056181 env[1300]: time="2023-10-02T19:19:24.056138220Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:19:24.065402 env[1300]: time="2023-10-02T19:19:24.065355834Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:19:24.065887 env[1300]: time="2023-10-02T19:19:24.065851540Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\" returns image reference \"sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2\"" Oct 2 19:19:24.067956 env[1300]: time="2023-10-02T19:19:24.067923166Z" level=info msg="CreateContainer within sandbox \"a1bddf7549b2e880e45a2ea3e40c32fca994fc9fe4eaaf24bfa0e274271f1504\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:19:24.124676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2015331562.mount: Deactivated successfully. Oct 2 19:19:24.138756 env[1300]: time="2023-10-02T19:19:24.138701237Z" level=info msg="CreateContainer within sandbox \"a1bddf7549b2e880e45a2ea3e40c32fca994fc9fe4eaaf24bfa0e274271f1504\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9334f00d61bcb37a6e191f00a33f6d829c6b43112439542f3a41cea43dde2fa4\"" Oct 2 19:19:24.139514 env[1300]: time="2023-10-02T19:19:24.139486347Z" level=info msg="StartContainer for \"9334f00d61bcb37a6e191f00a33f6d829c6b43112439542f3a41cea43dde2fa4\"" Oct 2 19:19:24.164593 systemd[1]: run-containerd-runc-k8s.io-9334f00d61bcb37a6e191f00a33f6d829c6b43112439542f3a41cea43dde2fa4-runc.g9rV8n.mount: Deactivated successfully. Oct 2 19:19:24.169525 systemd[1]: Started cri-containerd-9334f00d61bcb37a6e191f00a33f6d829c6b43112439542f3a41cea43dde2fa4.scope. 
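The retried mount-cgroup container (Attempt:1) fails in exactly the same way just above, while the kube-proxy container created here comes up cleanly (its StartContainer returns successfully below), so the problem is specific to the Cilium init container's SELinux labelling rather than to the runtime as a whole; the kubelet subsequently puts the pod into a 10s CrashLoopBackOff. Because the same long "Error syncing pod, skipping" record repeats for every attempt, a small summary pass over the saved journal is handy (minimal sketch, hypothetical file name "node.log"):

    import collections, re, sys

    # Group the kubelet "Error syncing pod, skipping" records by pod, container
    # and failure kind, so repeated identical failures collapse into one line.
    pat = re.compile(
        r'failed to \\"(?P<action>\w+)\\" for \\"(?P<container>[^"\\]+)\\" '
        r'with (?P<kind>\w+):.*?pod="(?P<pod>[^"]+)"')
    tally = collections.Counter()
    with open(sys.argv[1] if len(sys.argv) > 1 else "node.log") as fh:
        for line in fh:
            if "Error syncing pod" in line:
                m = pat.search(line)
                if m:
                    tally[(m["pod"], m["container"], m["action"], m["kind"])] += 1
    for (pod, container, action, kind), n in tally.most_common():
        print(f"{n:3d}x {pod} {container}: {action} -> {kind}")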
Oct 2 19:19:24.201395 kernel: kauditd_printk_skb: 135 callbacks suppressed Oct 2 19:19:24.201558 kernel: audit: type=1400 audit(1696274364.184:593): avc: denied { perfmon } for pid=2107 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.184000 audit[2107]: AVC avc: denied { perfmon } for pid=2107 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.184000 audit[2107]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c00014d6b0 a2=3c a3=8 items=0 ppid=1951 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.220054 kernel: audit: type=1300 audit(1696274364.184:593): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c00014d6b0 a2=3c a3=8 items=0 ppid=1951 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333466303064363162636233376136653139316630306133336636 Oct 2 19:19:24.184000 audit[2107]: AVC avc: denied { bpf } for pid=2107 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.254247 kernel: audit: type=1327 audit(1696274364.184:593): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333466303064363162636233376136653139316630306133336636 Oct 2 19:19:24.254376 kernel: audit: type=1400 audit(1696274364.184:594): avc: denied { bpf } for pid=2107 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.254402 kernel: audit: type=1400 audit(1696274364.184:594): avc: denied { bpf } for pid=2107 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.184000 audit[2107]: AVC avc: denied { bpf } for pid=2107 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.264953 env[1300]: time="2023-10-02T19:19:24.264849090Z" level=info msg="StartContainer for \"9334f00d61bcb37a6e191f00a33f6d829c6b43112439542f3a41cea43dde2fa4\" returns successfully" Oct 2 19:19:24.267377 kernel: audit: type=1400 audit(1696274364.184:594): avc: denied { bpf } for pid=2107 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.184000 audit[2107]: AVC avc: denied { bpf } for pid=2107 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.269712 kubelet[1860]: I1002 19:19:24.269250 1860 scope.go:115] "RemoveContainer" 
containerID="f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9" Oct 2 19:19:24.269712 kubelet[1860]: I1002 19:19:24.269583 1860 scope.go:115] "RemoveContainer" containerID="f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9" Oct 2 19:19:24.270999 env[1300]: time="2023-10-02T19:19:24.270971365Z" level=info msg="RemoveContainer for \"f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9\"" Oct 2 19:19:24.274346 env[1300]: time="2023-10-02T19:19:24.274312306Z" level=info msg="RemoveContainer for \"f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9\"" Oct 2 19:19:24.274553 env[1300]: time="2023-10-02T19:19:24.274524609Z" level=error msg="RemoveContainer for \"f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9\" failed" error="failed to set removing state for container \"f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9\": container is already in removing state" Oct 2 19:19:24.274827 kubelet[1860]: E1002 19:19:24.274750 1860 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9\": container is already in removing state" containerID="f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9" Oct 2 19:19:24.274827 kubelet[1860]: I1002 19:19:24.274796 1860 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9} err="rpc error: code = Unknown desc = failed to set removing state for container \"f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9\": container is already in removing state" Oct 2 19:19:24.184000 audit[2107]: AVC avc: denied { perfmon } for pid=2107 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.295208 kernel: audit: type=1400 audit(1696274364.184:594): avc: denied { perfmon } for pid=2107 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.295308 kernel: audit: type=1400 audit(1696274364.184:594): avc: denied { perfmon } for pid=2107 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.184000 audit[2107]: AVC avc: denied { perfmon } for pid=2107 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.308315 env[1300]: time="2023-10-02T19:19:24.308274224Z" level=info msg="RemoveContainer for \"f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9\" returns successfully" Oct 2 19:19:24.309203 kubelet[1860]: E1002 19:19:24.308957 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b)\"" pod="kube-system/cilium-x2g4b" podUID=4993a1bc-d12d-4d80-8674-1449084f234b Oct 2 19:19:24.184000 audit[2107]: AVC avc: denied { perfmon } for pid=2107 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.322506 kernel: audit: type=1400 
audit(1696274364.184:594): avc: denied { perfmon } for pid=2107 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.322602 kernel: audit: type=1400 audit(1696274364.184:594): avc: denied { perfmon } for pid=2107 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.184000 audit[2107]: AVC avc: denied { perfmon } for pid=2107 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.184000 audit[2107]: AVC avc: denied { perfmon } for pid=2107 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.184000 audit[2107]: AVC avc: denied { bpf } for pid=2107 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.184000 audit[2107]: AVC avc: denied { bpf } for pid=2107 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.184000 audit: BPF prog-id=76 op=LOAD Oct 2 19:19:24.184000 audit[2107]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00014d9d8 a2=78 a3=c000210210 items=0 ppid=1951 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333466303064363162636233376136653139316630306133336636 Oct 2 19:19:24.201000 audit[2107]: AVC avc: denied { bpf } for pid=2107 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.201000 audit[2107]: AVC avc: denied { bpf } for pid=2107 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.201000 audit[2107]: AVC avc: denied { perfmon } for pid=2107 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.201000 audit[2107]: AVC avc: denied { perfmon } for pid=2107 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.201000 audit[2107]: AVC avc: denied { perfmon } for pid=2107 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.201000 audit[2107]: AVC avc: denied { perfmon } for pid=2107 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.201000 audit[2107]: AVC avc: denied { perfmon } for pid=2107 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.201000 audit[2107]: AVC avc: denied { bpf } for pid=2107 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.201000 audit[2107]: AVC avc: denied { bpf } for pid=2107 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.201000 audit: BPF prog-id=77 op=LOAD Oct 2 19:19:24.201000 audit[2107]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00014d770 a2=78 a3=c000210258 items=0 ppid=1951 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333466303064363162636233376136653139316630306133336636 Oct 2 19:19:24.201000 audit: BPF prog-id=77 op=UNLOAD Oct 2 19:19:24.201000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:19:24.201000 audit[2107]: AVC avc: denied { bpf } for pid=2107 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.201000 audit[2107]: AVC avc: denied { bpf } for pid=2107 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.201000 audit[2107]: AVC avc: denied { bpf } for pid=2107 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.201000 audit[2107]: AVC avc: denied { perfmon } for pid=2107 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.201000 audit[2107]: AVC avc: denied { perfmon } for pid=2107 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.201000 audit[2107]: AVC avc: denied { perfmon } for pid=2107 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.201000 audit[2107]: AVC avc: denied { perfmon } for pid=2107 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.201000 audit[2107]: AVC avc: denied { perfmon } for pid=2107 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.201000 audit[2107]: AVC avc: denied { bpf } for pid=2107 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.201000 audit[2107]: AVC avc: denied { bpf } for pid=2107 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:19:24.201000 audit: BPF prog-id=78 op=LOAD Oct 2 19:19:24.201000 audit[2107]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00014dc30 a2=78 a3=c0002102e8 items=0 ppid=1951 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333466303064363162636233376136653139316630306133336636 Oct 2 19:19:24.354009 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) Oct 2 19:19:24.354158 kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes) Oct 2 19:19:24.354185 kernel: IPVS: ipvs loaded. Oct 2 19:19:24.382227 kernel: IPVS: [rr] scheduler registered. Oct 2 19:19:24.392238 kernel: IPVS: [wrr] scheduler registered. Oct 2 19:19:24.401137 kernel: IPVS: [sh] scheduler registered. Oct 2 19:19:24.455000 audit[2165]: NETFILTER_CFG table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2165 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:19:24.455000 audit[2165]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe413d7da0 a2=0 a3=7ffe413d7d8c items=0 ppid=2118 pid=2165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.455000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:19:24.457000 audit[2166]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2166 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:19:24.457000 audit[2166]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd9d38d10 a2=0 a3=7fffd9d38cfc items=0 ppid=2118 pid=2166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.457000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:19:24.458000 audit[2167]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2167 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:19:24.458000 audit[2167]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd1fa4a240 a2=0 a3=7ffd1fa4a22c items=0 ppid=2118 pid=2167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.458000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:19:24.459000 audit[2168]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_chain pid=2168 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:19:24.459000 audit[2168]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc1b15b1e0 a2=0 a3=7ffc1b15b1cc items=0 ppid=2118 pid=2168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.459000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 
19:19:24.459000 audit[2169]: NETFILTER_CFG table=nat:43 family=10 entries=1 op=nft_register_chain pid=2169 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:19:24.459000 audit[2169]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3f46ad40 a2=0 a3=7ffc3f46ad2c items=0 ppid=2118 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.459000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:19:24.461000 audit[2170]: NETFILTER_CFG table=filter:44 family=10 entries=1 op=nft_register_chain pid=2170 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:19:24.461000 audit[2170]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff58d68980 a2=0 a3=7fff58d6896c items=0 ppid=2118 pid=2170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.461000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:19:24.562000 audit[2172]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_chain pid=2172 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:19:24.562000 audit[2172]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff7f9038c0 a2=0 a3=7fff7f9038ac items=0 ppid=2118 pid=2172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.562000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:19:24.567000 audit[2174]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2174 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:19:24.567000 audit[2174]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcffa9a580 a2=0 a3=7ffcffa9a56c items=0 ppid=2118 pid=2174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.567000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:19:24.570000 audit[2177]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=2177 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:19:24.570000 audit[2177]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff24673a60 a2=0 a3=7fff24673a4c items=0 ppid=2118 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.570000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:19:24.572000 audit[2178]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_chain pid=2178 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:19:24.572000 audit[2178]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe149f7fd0 a2=0 a3=7ffe149f7fbc items=0 ppid=2118 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.572000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:19:24.574000 audit[2180]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=2180 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:19:24.574000 audit[2180]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd48a6cdd0 a2=0 a3=7ffd48a6cdbc items=0 ppid=2118 pid=2180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.574000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:19:24.575000 audit[2181]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2181 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:19:24.575000 audit[2181]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe35259800 a2=0 a3=7ffe352597ec items=0 ppid=2118 pid=2181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.575000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:19:24.578000 audit[2183]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2183 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:19:24.578000 audit[2183]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd886dde80 a2=0 a3=7ffd886dde6c items=0 ppid=2118 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.578000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:19:24.581000 audit[2186]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=2186 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:19:24.581000 audit[2186]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffdd4e16f0 a2=0 a3=7fffdd4e16dc items=0 ppid=2118 
pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.581000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:19:24.582000 audit[2187]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2187 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:19:24.582000 audit[2187]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc872f7760 a2=0 a3=7ffc872f774c items=0 ppid=2118 pid=2187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.582000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:19:24.585000 audit[2189]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2189 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:19:24.585000 audit[2189]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff872c8220 a2=0 a3=7fff872c820c items=0 ppid=2118 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.585000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:19:24.586000 audit[2190]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=2190 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:19:24.586000 audit[2190]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe759760a0 a2=0 a3=7ffe7597608c items=0 ppid=2118 pid=2190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.586000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:19:24.588000 audit[2192]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2192 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:19:24.588000 audit[2192]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff8131d3b0 a2=0 a3=7fff8131d39c items=0 ppid=2118 pid=2192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.588000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:19:24.592000 audit[2195]: NETFILTER_CFG table=filter:57 family=2 entries=1 
op=nft_register_rule pid=2195 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:19:24.592000 audit[2195]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc351d5ad0 a2=0 a3=7ffc351d5abc items=0 ppid=2118 pid=2195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.592000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:19:24.596000 audit[2198]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_rule pid=2198 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:19:24.596000 audit[2198]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffef84d7d20 a2=0 a3=7ffef84d7d0c items=0 ppid=2118 pid=2198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.596000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:19:24.597000 audit[2199]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_chain pid=2199 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:19:24.597000 audit[2199]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcc961d0a0 a2=0 a3=7ffcc961d08c items=0 ppid=2118 pid=2199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.597000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:19:24.601000 audit[2201]: NETFILTER_CFG table=nat:60 family=2 entries=2 op=nft_register_chain pid=2201 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:19:24.601000 audit[2201]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff197ad5a0 a2=0 a3=7fff197ad58c items=0 ppid=2118 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.601000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:19:24.604000 audit[2204]: NETFILTER_CFG table=nat:61 family=2 entries=2 op=nft_register_chain pid=2204 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:19:24.604000 audit[2204]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff62bc6e20 a2=0 a3=7fff62bc6e0c items=0 ppid=2118 pid=2204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.604000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:19:24.642000 audit[2208]: NETFILTER_CFG table=filter:62 family=2 entries=6 op=nft_register_rule pid=2208 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:19:24.642000 audit[2208]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffc89412d60 a2=0 a3=7ffc89412d4c items=0 ppid=2118 pid=2208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.642000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:19:24.666000 audit[2208]: NETFILTER_CFG table=nat:63 family=2 entries=17 op=nft_register_chain pid=2208 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:19:24.666000 audit[2208]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffc89412d60 a2=0 a3=7ffc89412d4c items=0 ppid=2118 pid=2208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.666000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:19:24.671000 audit[2212]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_chain pid=2212 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:19:24.671000 audit[2212]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcea963c70 a2=0 a3=7ffcea963c5c items=0 ppid=2118 pid=2212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.671000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:19:24.674000 audit[2214]: NETFILTER_CFG table=filter:65 family=10 entries=2 op=nft_register_chain pid=2214 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:19:24.674000 audit[2214]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffffefaba60 a2=0 a3=7ffffefaba4c items=0 ppid=2118 pid=2214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.674000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:19:24.677000 audit[2217]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2217 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:19:24.677000 audit[2217]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc999c1140 a2=0 a3=7ffc999c112c items=0 ppid=2118 pid=2217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.677000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:19:24.679000 audit[2218]: NETFILTER_CFG table=filter:67 family=10 entries=1 op=nft_register_chain pid=2218 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:19:24.679000 audit[2218]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce2d66c90 a2=0 a3=7ffce2d66c7c items=0 ppid=2118 pid=2218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.679000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:19:24.681000 audit[2220]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_rule pid=2220 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:19:24.681000 audit[2220]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe0189f290 a2=0 a3=7ffe0189f27c items=0 ppid=2118 pid=2220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.681000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:19:24.682000 audit[2221]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_chain pid=2221 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:19:24.682000 audit[2221]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb9e4d020 a2=0 a3=7fffb9e4d00c items=0 ppid=2118 pid=2221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.682000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:19:24.684000 audit[2223]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_rule pid=2223 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:19:24.684000 audit[2223]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffdbbf2560 a2=0 a3=7fffdbbf254c items=0 ppid=2118 pid=2223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.684000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:19:24.688000 audit[2226]: NETFILTER_CFG table=filter:71 family=10 entries=2 op=nft_register_chain pid=2226 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" 
Oct 2 19:19:24.688000 audit[2226]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc56f635a0 a2=0 a3=7ffc56f6358c items=0 ppid=2118 pid=2226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.688000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:19:24.689000 audit[2227]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_chain pid=2227 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:19:24.689000 audit[2227]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca00124b0 a2=0 a3=7ffca001249c items=0 ppid=2118 pid=2227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.689000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:19:24.691000 audit[2229]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=2229 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:19:24.691000 audit[2229]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff97e7da00 a2=0 a3=7fff97e7d9ec items=0 ppid=2118 pid=2229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.691000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:19:24.693000 audit[2230]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_chain pid=2230 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:19:24.693000 audit[2230]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc49d9a370 a2=0 a3=7ffc49d9a35c items=0 ppid=2118 pid=2230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.693000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:19:24.695000 audit[2232]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_rule pid=2232 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:19:24.695000 audit[2232]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffed22e2b30 a2=0 a3=7ffed22e2b1c items=0 ppid=2118 pid=2232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.695000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:19:24.699000 audit[2235]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2235 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:19:24.699000 audit[2235]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc285b15a0 a2=0 a3=7ffc285b158c items=0 ppid=2118 pid=2235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.699000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:19:24.702000 audit[2238]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2238 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:19:24.702000 audit[2238]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffa51fd000 a2=0 a3=7fffa51fcfec items=0 ppid=2118 pid=2238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.702000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:19:24.703000 audit[2239]: NETFILTER_CFG table=nat:78 family=10 entries=1 op=nft_register_chain pid=2239 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:19:24.703000 audit[2239]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc7fc88c40 a2=0 a3=7ffc7fc88c2c items=0 ppid=2118 pid=2239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.703000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:19:24.705000 audit[2241]: NETFILTER_CFG table=nat:79 family=10 entries=2 op=nft_register_chain pid=2241 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:19:24.705000 audit[2241]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffee68df070 a2=0 a3=7ffee68df05c items=0 ppid=2118 pid=2241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.705000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:19:24.709000 audit[2244]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2244 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:19:24.709000 
audit[2244]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffd2a5a6df0 a2=0 a3=7ffd2a5a6ddc items=0 ppid=2118 pid=2244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.709000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:19:24.714000 audit[2248]: NETFILTER_CFG table=filter:81 family=10 entries=3 op=nft_register_rule pid=2248 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:19:24.714000 audit[2248]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe6cdc7b10 a2=0 a3=7ffe6cdc7afc items=0 ppid=2118 pid=2248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.714000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:19:24.715000 audit[2248]: NETFILTER_CFG table=nat:82 family=10 entries=10 op=nft_register_chain pid=2248 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:19:24.715000 audit[2248]: SYSCALL arch=c000003e syscall=46 success=yes exit=1860 a0=3 a1=7ffe6cdc7b10 a2=0 a3=7ffe6cdc7afc items=0 ppid=2118 pid=2248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:19:24.715000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:19:25.008021 kubelet[1860]: E1002 19:19:25.007960 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:25.273768 kubelet[1860]: E1002 19:19:25.273382 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b)\"" pod="kube-system/cilium-x2g4b" podUID=4993a1bc-d12d-4d80-8674-1449084f234b Oct 2 19:19:25.610960 kubelet[1860]: W1002 19:19:25.610492 1860 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4993a1bc_d12d_4d80_8674_1449084f234b.slice/cri-containerd-f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9.scope WatchSource:0}: container "f3785393bbfcf1cd0280bc905914943dd82cbfbf41f20405dee4180b0cc792d9" in namespace "k8s.io": not found Oct 2 19:19:26.009029 kubelet[1860]: E1002 19:19:26.008971 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:27.010106 kubelet[1860]: E1002 19:19:27.010040 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:27.054449 kubelet[1860]: E1002 19:19:27.054411 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" Oct 2 19:19:28.011128 kubelet[1860]: E1002 19:19:28.011049 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:28.718930 kubelet[1860]: W1002 19:19:28.718883 1860 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4993a1bc_d12d_4d80_8674_1449084f234b.slice/cri-containerd-cbc2151e320db16007a80cec22787f8f4cbff6cae17ddd3c42c1af157ab19260.scope WatchSource:0}: task cbc2151e320db16007a80cec22787f8f4cbff6cae17ddd3c42c1af157ab19260 not found: not found Oct 2 19:19:29.011540 kubelet[1860]: E1002 19:19:29.011421 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:30.012145 kubelet[1860]: E1002 19:19:30.012082 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:31.013267 kubelet[1860]: E1002 19:19:31.013211 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:31.974619 kubelet[1860]: E1002 19:19:31.974552 1860 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:32.014000 kubelet[1860]: E1002 19:19:32.013941 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:32.055881 kubelet[1860]: E1002 19:19:32.055831 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:33.014583 kubelet[1860]: E1002 19:19:33.014517 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:34.015458 kubelet[1860]: E1002 19:19:34.015396 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:35.015740 kubelet[1860]: E1002 19:19:35.015674 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:36.016136 kubelet[1860]: E1002 19:19:36.016067 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:37.016835 kubelet[1860]: E1002 19:19:37.016769 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:37.056661 kubelet[1860]: E1002 19:19:37.056625 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:38.017495 kubelet[1860]: E1002 19:19:38.017435 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:38.220161 env[1300]: time="2023-10-02T19:19:38.220103440Z" level=info msg="CreateContainer within sandbox \"d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:19:38.261239 env[1300]: time="2023-10-02T19:19:38.261184912Z" level=info msg="CreateContainer within sandbox 
\"d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f\"" Oct 2 19:19:38.261850 env[1300]: time="2023-10-02T19:19:38.261807217Z" level=info msg="StartContainer for \"6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f\"" Oct 2 19:19:38.287606 systemd[1]: Started cri-containerd-6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f.scope. Oct 2 19:19:38.300623 systemd[1]: cri-containerd-6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f.scope: Deactivated successfully. Oct 2 19:19:38.667041 env[1300]: time="2023-10-02T19:19:38.666497981Z" level=info msg="shim disconnected" id=6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f Oct 2 19:19:38.667041 env[1300]: time="2023-10-02T19:19:38.666569082Z" level=warning msg="cleaning up after shim disconnected" id=6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f namespace=k8s.io Oct 2 19:19:38.667041 env[1300]: time="2023-10-02T19:19:38.666582882Z" level=info msg="cleaning up dead shim" Oct 2 19:19:38.674925 env[1300]: time="2023-10-02T19:19:38.674878057Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:19:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2271 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:19:38Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:19:38.675248 env[1300]: time="2023-10-02T19:19:38.675184860Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:19:38.678224 env[1300]: time="2023-10-02T19:19:38.678167087Z" level=error msg="Failed to pipe stdout of container \"6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f\"" error="reading from a closed fifo" Oct 2 19:19:38.681214 env[1300]: time="2023-10-02T19:19:38.681165214Z" level=error msg="Failed to pipe stderr of container \"6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f\"" error="reading from a closed fifo" Oct 2 19:19:38.701324 env[1300]: time="2023-10-02T19:19:38.701258896Z" level=error msg="StartContainer for \"6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:19:38.701589 kubelet[1860]: E1002 19:19:38.701563 1860 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f" Oct 2 19:19:38.701753 kubelet[1860]: E1002 19:19:38.701730 1860 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:19:38.701753 kubelet[1860]: nsenter 
--cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:19:38.701753 kubelet[1860]: rm /hostbin/cilium-mount Oct 2 19:19:38.701753 kubelet[1860]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mqfcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:19:38.702014 kubelet[1860]: E1002 19:19:38.701798 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-x2g4b" podUID=4993a1bc-d12d-4d80-8674-1449084f234b Oct 2 19:19:39.018268 kubelet[1860]: E1002 19:19:39.018195 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:39.247107 systemd[1]: run-containerd-runc-k8s.io-6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f-runc.5uDN8a.mount: Deactivated successfully. Oct 2 19:19:39.247262 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f-rootfs.mount: Deactivated successfully. 
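[Note] Every failed StartContainer above ends in "write /proc/self/attr/keycreate: invalid argument". That procfs file is where a process sets the SELinux label applied to kernel keyrings it creates; runc writes the container's label there before starting the init process, and EINVAL is the kernel's response when the loaded policy does not accept the requested label. A plausible reading here is that the host policy does not know the context derived from the pod's SELinuxOptions (type spc_t) while the processes themselves run as kernel_t. A rough illustration of the failing step, assuming an SELinux-enabled host and root privileges; the label string mirrors the pod spec and is otherwise an assumption:

import errno

# Label assembled from the pod's SELinuxOptions in the log (type spc_t, level s0);
# the user/role parts are assumptions for illustration only.
LABEL = "system_u:system_r:spc_t:s0"

def set_keycreate_label(label):
    # Mirrors the step runc performs before exec'ing the container init process.
    try:
        with open("/proc/self/attr/keycreate", "w") as fh:
            fh.write(label)
        print("label accepted:", label)
    except OSError as exc:
        if exc.errno == errno.EINVAL:
            print("EINVAL: the loaded policy rejected", label)
        else:
            print("write failed:", exc)

if __name__ == "__main__":
    set_keycreate_label(LABEL)

Because the same write fails on every retry, each attempt of the mount-cgroup init container ends in RunContainerError before its entrypoint ever runs.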
Oct 2 19:19:39.301045 kubelet[1860]: I1002 19:19:39.300552 1860 scope.go:115] "RemoveContainer" containerID="cbc2151e320db16007a80cec22787f8f4cbff6cae17ddd3c42c1af157ab19260" Oct 2 19:19:39.301045 kubelet[1860]: I1002 19:19:39.301017 1860 scope.go:115] "RemoveContainer" containerID="cbc2151e320db16007a80cec22787f8f4cbff6cae17ddd3c42c1af157ab19260" Oct 2 19:19:39.302491 env[1300]: time="2023-10-02T19:19:39.302444884Z" level=info msg="RemoveContainer for \"cbc2151e320db16007a80cec22787f8f4cbff6cae17ddd3c42c1af157ab19260\"" Oct 2 19:19:39.303480 env[1300]: time="2023-10-02T19:19:39.303440793Z" level=info msg="RemoveContainer for \"cbc2151e320db16007a80cec22787f8f4cbff6cae17ddd3c42c1af157ab19260\"" Oct 2 19:19:39.303684 env[1300]: time="2023-10-02T19:19:39.303555194Z" level=error msg="RemoveContainer for \"cbc2151e320db16007a80cec22787f8f4cbff6cae17ddd3c42c1af157ab19260\" failed" error="failed to set removing state for container \"cbc2151e320db16007a80cec22787f8f4cbff6cae17ddd3c42c1af157ab19260\": container is already in removing state" Oct 2 19:19:39.304776 kubelet[1860]: E1002 19:19:39.304057 1860 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"cbc2151e320db16007a80cec22787f8f4cbff6cae17ddd3c42c1af157ab19260\": container is already in removing state" containerID="cbc2151e320db16007a80cec22787f8f4cbff6cae17ddd3c42c1af157ab19260" Oct 2 19:19:39.304776 kubelet[1860]: E1002 19:19:39.304136 1860 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "cbc2151e320db16007a80cec22787f8f4cbff6cae17ddd3c42c1af157ab19260": container is already in removing state; Skipping pod "cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b)" Oct 2 19:19:39.304776 kubelet[1860]: E1002 19:19:39.304550 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b)\"" pod="kube-system/cilium-x2g4b" podUID=4993a1bc-d12d-4d80-8674-1449084f234b Oct 2 19:19:39.317202 env[1300]: time="2023-10-02T19:19:39.317162015Z" level=info msg="RemoveContainer for \"cbc2151e320db16007a80cec22787f8f4cbff6cae17ddd3c42c1af157ab19260\" returns successfully" Oct 2 19:19:40.018736 kubelet[1860]: E1002 19:19:40.018676 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:41.018913 kubelet[1860]: E1002 19:19:41.018852 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:41.771677 kubelet[1860]: W1002 19:19:41.771629 1860 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4993a1bc_d12d_4d80_8674_1449084f234b.slice/cri-containerd-6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f.scope WatchSource:0}: task 6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f not found: not found Oct 2 19:19:42.019551 kubelet[1860]: E1002 19:19:42.019495 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:42.058101 kubelet[1860]: E1002 19:19:42.057796 1860 kubelet.go:2373] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:43.020605 kubelet[1860]: E1002 19:19:43.020543 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:44.021223 kubelet[1860]: E1002 19:19:44.021165 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:45.021384 kubelet[1860]: E1002 19:19:45.021314 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:46.022144 kubelet[1860]: E1002 19:19:46.022078 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:47.023032 kubelet[1860]: E1002 19:19:47.022977 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:47.058867 kubelet[1860]: E1002 19:19:47.058830 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:48.023247 kubelet[1860]: E1002 19:19:48.023190 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:49.024150 kubelet[1860]: E1002 19:19:49.024083 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:50.024570 kubelet[1860]: E1002 19:19:50.024516 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:51.025133 kubelet[1860]: E1002 19:19:51.025067 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:51.974401 kubelet[1860]: E1002 19:19:51.974348 1860 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:52.025870 kubelet[1860]: E1002 19:19:52.025828 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:52.059398 kubelet[1860]: E1002 19:19:52.059367 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:52.210181 kubelet[1860]: E1002 19:19:52.210144 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b)\"" pod="kube-system/cilium-x2g4b" podUID=4993a1bc-d12d-4d80-8674-1449084f234b Oct 2 19:19:53.026074 kubelet[1860]: E1002 19:19:53.026012 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:54.027003 kubelet[1860]: E1002 19:19:54.026918 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:55.027169 kubelet[1860]: E1002 19:19:55.027098 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:56.028146 
kubelet[1860]: E1002 19:19:56.028087 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:57.028877 kubelet[1860]: E1002 19:19:57.028819 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:57.060450 kubelet[1860]: E1002 19:19:57.060415 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:58.029109 kubelet[1860]: E1002 19:19:58.029052 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:59.029577 kubelet[1860]: E1002 19:19:59.029515 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:00.030289 kubelet[1860]: E1002 19:20:00.030234 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:01.031321 kubelet[1860]: E1002 19:20:01.031262 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:02.032036 kubelet[1860]: E1002 19:20:02.031971 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:02.061182 kubelet[1860]: E1002 19:20:02.061145 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:03.032635 kubelet[1860]: E1002 19:20:03.032576 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:03.212469 env[1300]: time="2023-10-02T19:20:03.212409398Z" level=info msg="CreateContainer within sandbox \"d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:20:03.259842 env[1300]: time="2023-10-02T19:20:03.259775476Z" level=info msg="CreateContainer within sandbox \"d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9\"" Oct 2 19:20:03.260434 env[1300]: time="2023-10-02T19:20:03.260397179Z" level=info msg="StartContainer for \"86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9\"" Oct 2 19:20:03.284975 systemd[1]: Started cri-containerd-86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9.scope. Oct 2 19:20:03.295353 systemd[1]: cri-containerd-86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9.scope: Deactivated successfully. Oct 2 19:20:03.295621 systemd[1]: Stopped cri-containerd-86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9.scope. Oct 2 19:20:03.299397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9-rootfs.mount: Deactivated successfully. 
Oct 2 19:20:03.318346 env[1300]: time="2023-10-02T19:20:03.318284919Z" level=info msg="shim disconnected" id=86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9 Oct 2 19:20:03.318346 env[1300]: time="2023-10-02T19:20:03.318345319Z" level=warning msg="cleaning up after shim disconnected" id=86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9 namespace=k8s.io Oct 2 19:20:03.318635 env[1300]: time="2023-10-02T19:20:03.318356119Z" level=info msg="cleaning up dead shim" Oct 2 19:20:03.326502 env[1300]: time="2023-10-02T19:20:03.326456667Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:20:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2311 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:20:03Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:20:03.326794 env[1300]: time="2023-10-02T19:20:03.326730468Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:20:03.327222 env[1300]: time="2023-10-02T19:20:03.327172771Z" level=error msg="Failed to pipe stdout of container \"86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9\"" error="reading from a closed fifo" Oct 2 19:20:03.327298 env[1300]: time="2023-10-02T19:20:03.327258071Z" level=error msg="Failed to pipe stderr of container \"86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9\"" error="reading from a closed fifo" Oct 2 19:20:03.331982 env[1300]: time="2023-10-02T19:20:03.331938799Z" level=error msg="StartContainer for \"86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:20:03.332255 kubelet[1860]: E1002 19:20:03.332232 1860 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9" Oct 2 19:20:03.332394 kubelet[1860]: E1002 19:20:03.332361 1860 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:20:03.332394 kubelet[1860]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:20:03.332394 kubelet[1860]: rm /hostbin/cilium-mount Oct 2 19:20:03.332394 kubelet[1860]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mqfcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:20:03.332593 kubelet[1860]: E1002 19:20:03.332414 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-x2g4b" podUID=4993a1bc-d12d-4d80-8674-1449084f234b Oct 2 19:20:03.346667 kubelet[1860]: I1002 19:20:03.346566 1860 scope.go:115] "RemoveContainer" containerID="6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f" Oct 2 19:20:03.346992 kubelet[1860]: I1002 19:20:03.346936 1860 scope.go:115] "RemoveContainer" containerID="6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f" Oct 2 19:20:03.350066 env[1300]: time="2023-10-02T19:20:03.350024605Z" level=info msg="RemoveContainer for \"6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f\"" Oct 2 19:20:03.350301 env[1300]: time="2023-10-02T19:20:03.350261606Z" level=info msg="RemoveContainer for \"6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f\"" Oct 2 19:20:03.350537 env[1300]: time="2023-10-02T19:20:03.350501608Z" level=error msg="RemoveContainer for \"6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f\" failed" error="failed to set removing state for container \"6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f\": container is already in removing state" Oct 2 19:20:03.350661 kubelet[1860]: E1002 19:20:03.350643 1860 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f\": container is already in removing state" 
containerID="6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f" Oct 2 19:20:03.350739 kubelet[1860]: I1002 19:20:03.350679 1860 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f} err="rpc error: code = Unknown desc = failed to set removing state for container \"6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f\": container is already in removing state" Oct 2 19:20:03.359772 env[1300]: time="2023-10-02T19:20:03.359736462Z" level=info msg="RemoveContainer for \"6c376ecc8e6d37e5531272c4260c8c97fa0940c0bdbaba5e7face70bc4a9b69f\" returns successfully" Oct 2 19:20:03.362295 kubelet[1860]: E1002 19:20:03.361993 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b)\"" pod="kube-system/cilium-x2g4b" podUID=4993a1bc-d12d-4d80-8674-1449084f234b Oct 2 19:20:04.033265 kubelet[1860]: E1002 19:20:04.033208 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:05.033532 kubelet[1860]: E1002 19:20:05.033470 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:06.034203 kubelet[1860]: E1002 19:20:06.034141 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:06.424751 kubelet[1860]: W1002 19:20:06.424624 1860 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4993a1bc_d12d_4d80_8674_1449084f234b.slice/cri-containerd-86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9.scope WatchSource:0}: task 86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9 not found: not found Oct 2 19:20:07.035037 kubelet[1860]: E1002 19:20:07.034972 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:07.062895 kubelet[1860]: E1002 19:20:07.062854 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:08.036029 kubelet[1860]: E1002 19:20:08.035966 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:09.036573 kubelet[1860]: E1002 19:20:09.036511 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:10.036953 kubelet[1860]: E1002 19:20:10.036899 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:11.037232 kubelet[1860]: E1002 19:20:11.037171 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:11.974361 kubelet[1860]: E1002 19:20:11.974299 1860 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:12.037900 kubelet[1860]: E1002 19:20:12.037838 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
Oct 2 19:20:12.064072 kubelet[1860]: E1002 19:20:12.064037 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:13.038073 kubelet[1860]: E1002 19:20:13.038004 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:14.038782 kubelet[1860]: E1002 19:20:14.038718 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:15.038969 kubelet[1860]: E1002 19:20:15.038907 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:16.039223 kubelet[1860]: E1002 19:20:16.039159 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:17.039774 kubelet[1860]: E1002 19:20:17.039714 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:17.065780 kubelet[1860]: E1002 19:20:17.065743 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:18.040452 kubelet[1860]: E1002 19:20:18.040392 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:18.210679 kubelet[1860]: E1002 19:20:18.210633 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b)\"" pod="kube-system/cilium-x2g4b" podUID=4993a1bc-d12d-4d80-8674-1449084f234b Oct 2 19:20:19.040614 kubelet[1860]: E1002 19:20:19.040544 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:20.041522 kubelet[1860]: E1002 19:20:20.041459 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:21.041757 kubelet[1860]: E1002 19:20:21.041696 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:22.042755 kubelet[1860]: E1002 19:20:22.042696 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:22.066704 kubelet[1860]: E1002 19:20:22.066676 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:23.043086 kubelet[1860]: E1002 19:20:23.043027 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:24.043344 kubelet[1860]: E1002 19:20:24.043273 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:25.044021 kubelet[1860]: E1002 19:20:25.043961 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:26.045109 kubelet[1860]: E1002 19:20:26.045053 1860 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:27.045373 kubelet[1860]: E1002 19:20:27.045306 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:27.068396 kubelet[1860]: E1002 19:20:27.068357 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:28.045760 kubelet[1860]: E1002 19:20:28.045699 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:29.046736 kubelet[1860]: E1002 19:20:29.046676 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:29.209906 kubelet[1860]: E1002 19:20:29.209856 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b)\"" pod="kube-system/cilium-x2g4b" podUID=4993a1bc-d12d-4d80-8674-1449084f234b Oct 2 19:20:30.047912 kubelet[1860]: E1002 19:20:30.047850 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:31.048524 kubelet[1860]: E1002 19:20:31.048463 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:31.973848 kubelet[1860]: E1002 19:20:31.973795 1860 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:32.049433 kubelet[1860]: E1002 19:20:32.049372 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:32.069602 kubelet[1860]: E1002 19:20:32.069569 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:33.049605 kubelet[1860]: E1002 19:20:33.049542 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:34.050562 kubelet[1860]: E1002 19:20:34.050501 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:35.051676 kubelet[1860]: E1002 19:20:35.051608 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:36.051847 kubelet[1860]: E1002 19:20:36.051785 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:37.052547 kubelet[1860]: E1002 19:20:37.052487 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:37.070318 kubelet[1860]: E1002 19:20:37.070277 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:38.053448 kubelet[1860]: E1002 19:20:38.053387 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:20:39.053622 kubelet[1860]: E1002 19:20:39.053555 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:40.054273 kubelet[1860]: E1002 19:20:40.054207 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:41.054822 kubelet[1860]: E1002 19:20:41.054759 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:42.055859 kubelet[1860]: E1002 19:20:42.055791 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:42.071128 kubelet[1860]: E1002 19:20:42.071088 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:42.210902 kubelet[1860]: E1002 19:20:42.210467 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b)\"" pod="kube-system/cilium-x2g4b" podUID=4993a1bc-d12d-4d80-8674-1449084f234b Oct 2 19:20:43.056498 kubelet[1860]: E1002 19:20:43.056443 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:44.057368 kubelet[1860]: E1002 19:20:44.057309 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:45.057751 kubelet[1860]: E1002 19:20:45.057682 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:46.058152 kubelet[1860]: E1002 19:20:46.058086 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:47.058362 kubelet[1860]: E1002 19:20:47.058299 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:47.072355 kubelet[1860]: E1002 19:20:47.072317 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:48.058896 kubelet[1860]: E1002 19:20:48.058840 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:49.059853 kubelet[1860]: E1002 19:20:49.059790 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:50.060484 kubelet[1860]: E1002 19:20:50.060423 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:51.061204 kubelet[1860]: E1002 19:20:51.061144 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:51.973926 kubelet[1860]: E1002 19:20:51.973861 1860 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:52.062106 kubelet[1860]: E1002 19:20:52.062076 1860 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:52.072764 kubelet[1860]: E1002 19:20:52.072738 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:53.062979 kubelet[1860]: E1002 19:20:53.062914 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:53.212592 env[1300]: time="2023-10-02T19:20:53.212537983Z" level=info msg="CreateContainer within sandbox \"d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:20:53.248627 env[1300]: time="2023-10-02T19:20:53.248572419Z" level=info msg="CreateContainer within sandbox \"d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65\"" Oct 2 19:20:53.249327 env[1300]: time="2023-10-02T19:20:53.249111121Z" level=info msg="StartContainer for \"61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65\"" Oct 2 19:20:53.268343 systemd[1]: Started cri-containerd-61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65.scope. Oct 2 19:20:53.285098 systemd[1]: cri-containerd-61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65.scope: Deactivated successfully. Oct 2 19:20:53.289671 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65-rootfs.mount: Deactivated successfully. Oct 2 19:20:53.314404 env[1300]: time="2023-10-02T19:20:53.313760765Z" level=info msg="shim disconnected" id=61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65 Oct 2 19:20:53.314404 env[1300]: time="2023-10-02T19:20:53.313825065Z" level=warning msg="cleaning up after shim disconnected" id=61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65 namespace=k8s.io Oct 2 19:20:53.314404 env[1300]: time="2023-10-02T19:20:53.313836365Z" level=info msg="cleaning up dead shim" Oct 2 19:20:53.322183 env[1300]: time="2023-10-02T19:20:53.322138296Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:20:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2354 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:20:53Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:20:53.322480 env[1300]: time="2023-10-02T19:20:53.322419797Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:20:53.325216 env[1300]: time="2023-10-02T19:20:53.325169208Z" level=error msg="Failed to pipe stderr of container \"61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65\"" error="reading from a closed fifo" Oct 2 19:20:53.325327 env[1300]: time="2023-10-02T19:20:53.325264408Z" level=error msg="Failed to pipe stdout of container \"61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65\"" error="reading from a closed fifo" Oct 2 19:20:53.330038 env[1300]: time="2023-10-02T19:20:53.329993126Z" level=error msg="StartContainer for \"61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65\" failed" error="failed to create 
containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:20:53.330304 kubelet[1860]: E1002 19:20:53.330279 1860 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65" Oct 2 19:20:53.330449 kubelet[1860]: E1002 19:20:53.330419 1860 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:20:53.330449 kubelet[1860]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:20:53.330449 kubelet[1860]: rm /hostbin/cilium-mount Oct 2 19:20:53.330449 kubelet[1860]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mqfcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:20:53.330646 kubelet[1860]: E1002 19:20:53.330468 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-x2g4b" podUID=4993a1bc-d12d-4d80-8674-1449084f234b Oct 2 19:20:53.437233 kubelet[1860]: I1002 19:20:53.437196 1860 scope.go:115] "RemoveContainer" containerID="86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9" Oct 2 
19:20:53.437589 kubelet[1860]: I1002 19:20:53.437566 1860 scope.go:115] "RemoveContainer" containerID="86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9" Oct 2 19:20:53.439437 env[1300]: time="2023-10-02T19:20:53.439386639Z" level=info msg="RemoveContainer for \"86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9\"" Oct 2 19:20:53.439608 env[1300]: time="2023-10-02T19:20:53.439578539Z" level=info msg="RemoveContainer for \"86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9\"" Oct 2 19:20:53.439789 env[1300]: time="2023-10-02T19:20:53.439743240Z" level=error msg="RemoveContainer for \"86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9\" failed" error="failed to set removing state for container \"86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9\": container is already in removing state" Oct 2 19:20:53.439977 kubelet[1860]: E1002 19:20:53.439958 1860 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9\": container is already in removing state" containerID="86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9" Oct 2 19:20:53.440065 kubelet[1860]: E1002 19:20:53.439993 1860 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9": container is already in removing state; Skipping pod "cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b)" Oct 2 19:20:53.440323 kubelet[1860]: E1002 19:20:53.440294 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b)\"" pod="kube-system/cilium-x2g4b" podUID=4993a1bc-d12d-4d80-8674-1449084f234b Oct 2 19:20:53.451907 env[1300]: time="2023-10-02T19:20:53.451869186Z" level=info msg="RemoveContainer for \"86716615d0d1f2dac173b2e5c6c537fcd68e7a4d126f31ef63229de5eb2316f9\" returns successfully" Oct 2 19:20:54.063746 kubelet[1860]: E1002 19:20:54.063682 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:55.064292 kubelet[1860]: E1002 19:20:55.064228 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:56.064662 kubelet[1860]: E1002 19:20:56.064603 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:56.419604 kubelet[1860]: W1002 19:20:56.419477 1860 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4993a1bc_d12d_4d80_8674_1449084f234b.slice/cri-containerd-61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65.scope WatchSource:0}: task 61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65 not found: not found Oct 2 19:20:57.065764 kubelet[1860]: E1002 19:20:57.065696 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:57.073501 kubelet[1860]: E1002 19:20:57.073478 1860 kubelet.go:2373] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:58.066135 kubelet[1860]: E1002 19:20:58.066057 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:59.067059 kubelet[1860]: E1002 19:20:59.066996 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:00.067435 kubelet[1860]: E1002 19:21:00.067372 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:01.068032 kubelet[1860]: E1002 19:21:01.067970 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:02.068541 kubelet[1860]: E1002 19:21:02.068491 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:02.074315 kubelet[1860]: E1002 19:21:02.074273 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:03.069475 kubelet[1860]: E1002 19:21:03.069417 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:04.069605 kubelet[1860]: E1002 19:21:04.069544 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:05.070037 kubelet[1860]: E1002 19:21:05.069977 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:06.070420 kubelet[1860]: E1002 19:21:06.070355 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:07.071094 kubelet[1860]: E1002 19:21:07.071032 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:07.075790 kubelet[1860]: E1002 19:21:07.075755 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:08.071263 kubelet[1860]: E1002 19:21:08.071195 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:08.211351 kubelet[1860]: E1002 19:21:08.210560 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b)\"" pod="kube-system/cilium-x2g4b" podUID=4993a1bc-d12d-4d80-8674-1449084f234b Oct 2 19:21:09.071447 kubelet[1860]: E1002 19:21:09.071388 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:10.071810 kubelet[1860]: E1002 19:21:10.071743 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:11.072378 kubelet[1860]: E1002 19:21:11.072312 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:21:11.974527 kubelet[1860]: E1002 19:21:11.974469 1860 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:12.072817 kubelet[1860]: E1002 19:21:12.072779 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:12.076359 kubelet[1860]: E1002 19:21:12.076328 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:13.073450 kubelet[1860]: E1002 19:21:13.073381 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:14.074483 kubelet[1860]: E1002 19:21:14.074427 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:15.074616 kubelet[1860]: E1002 19:21:15.074557 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:16.075139 kubelet[1860]: E1002 19:21:16.075079 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:17.075786 kubelet[1860]: E1002 19:21:17.075728 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:17.077446 kubelet[1860]: E1002 19:21:17.077421 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:18.076489 kubelet[1860]: E1002 19:21:18.076436 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:19.076849 kubelet[1860]: E1002 19:21:19.076790 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:20.077054 kubelet[1860]: E1002 19:21:20.076993 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:20.210072 kubelet[1860]: E1002 19:21:20.210031 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b)\"" pod="kube-system/cilium-x2g4b" podUID=4993a1bc-d12d-4d80-8674-1449084f234b Oct 2 19:21:21.078104 kubelet[1860]: E1002 19:21:21.078043 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:22.078183 kubelet[1860]: E1002 19:21:22.078153 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:22.078639 kubelet[1860]: E1002 19:21:22.078153 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:23.079641 kubelet[1860]: E1002 19:21:23.079579 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:24.080079 kubelet[1860]: E1002 19:21:24.080018 1860 file_linux.go:61] "Unable 
to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:25.080655 kubelet[1860]: E1002 19:21:25.080593 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:26.081765 kubelet[1860]: E1002 19:21:26.081698 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:27.079630 kubelet[1860]: E1002 19:21:27.079581 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:27.082787 kubelet[1860]: E1002 19:21:27.082763 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:28.083713 kubelet[1860]: E1002 19:21:28.083652 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:29.084706 kubelet[1860]: E1002 19:21:29.084640 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:30.085094 kubelet[1860]: E1002 19:21:30.085029 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:31.085287 kubelet[1860]: E1002 19:21:31.085224 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:31.210205 kubelet[1860]: E1002 19:21:31.210136 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b)\"" pod="kube-system/cilium-x2g4b" podUID=4993a1bc-d12d-4d80-8674-1449084f234b Oct 2 19:21:31.973609 kubelet[1860]: E1002 19:21:31.973555 1860 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:32.080398 kubelet[1860]: E1002 19:21:32.080359 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:32.085766 kubelet[1860]: E1002 19:21:32.085743 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:33.086849 kubelet[1860]: E1002 19:21:33.086782 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:34.087026 kubelet[1860]: E1002 19:21:34.086957 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:35.087647 kubelet[1860]: E1002 19:21:35.087590 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:36.088263 kubelet[1860]: E1002 19:21:36.088206 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:37.081388 kubelet[1860]: E1002 19:21:37.081350 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" Oct 2 19:21:37.088662 kubelet[1860]: E1002 19:21:37.088638 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:38.089806 kubelet[1860]: E1002 19:21:38.089745 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:39.090917 kubelet[1860]: E1002 19:21:39.090855 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:40.091993 kubelet[1860]: E1002 19:21:40.091938 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:41.092354 kubelet[1860]: E1002 19:21:41.092294 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:42.082841 kubelet[1860]: E1002 19:21:42.082802 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:42.093255 kubelet[1860]: E1002 19:21:42.093217 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:43.094288 kubelet[1860]: E1002 19:21:43.094223 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:44.095319 kubelet[1860]: E1002 19:21:44.095258 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:45.096388 kubelet[1860]: E1002 19:21:45.096320 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:45.210428 kubelet[1860]: E1002 19:21:45.210381 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b)\"" pod="kube-system/cilium-x2g4b" podUID=4993a1bc-d12d-4d80-8674-1449084f234b Oct 2 19:21:46.097242 kubelet[1860]: E1002 19:21:46.097179 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:47.083695 kubelet[1860]: E1002 19:21:47.083656 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:47.098078 kubelet[1860]: E1002 19:21:47.098040 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:48.099135 kubelet[1860]: E1002 19:21:48.099053 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:49.099256 kubelet[1860]: E1002 19:21:49.099194 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:50.100326 kubelet[1860]: E1002 19:21:50.100258 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:51.101351 kubelet[1860]: E1002 19:21:51.101291 1860 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:51.974199 kubelet[1860]: E1002 19:21:51.974144 1860 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:52.085135 kubelet[1860]: E1002 19:21:52.085098 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:52.102458 kubelet[1860]: E1002 19:21:52.102400 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:53.102786 kubelet[1860]: E1002 19:21:53.102725 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:54.103607 kubelet[1860]: E1002 19:21:54.103547 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:55.104616 kubelet[1860]: E1002 19:21:55.104551 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:56.105110 kubelet[1860]: E1002 19:21:56.105048 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:57.085961 kubelet[1860]: E1002 19:21:57.085920 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:57.105473 kubelet[1860]: E1002 19:21:57.105417 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:58.106309 kubelet[1860]: E1002 19:21:58.106245 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:58.209925 kubelet[1860]: E1002 19:21:58.209885 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b)\"" pod="kube-system/cilium-x2g4b" podUID=4993a1bc-d12d-4d80-8674-1449084f234b Oct 2 19:21:59.107272 kubelet[1860]: E1002 19:21:59.107214 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:00.107811 kubelet[1860]: E1002 19:22:00.107752 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:01.108680 kubelet[1860]: E1002 19:22:01.108624 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:02.086695 kubelet[1860]: E1002 19:22:02.086664 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:02.109069 kubelet[1860]: E1002 19:22:02.109010 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:03.109909 kubelet[1860]: E1002 19:22:03.109853 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:04.111016 
kubelet[1860]: E1002 19:22:04.110953 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:05.111468 kubelet[1860]: E1002 19:22:05.111412 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:06.112428 kubelet[1860]: E1002 19:22:06.112377 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:07.088267 kubelet[1860]: E1002 19:22:07.088233 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:07.112743 kubelet[1860]: E1002 19:22:07.112685 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:08.112996 kubelet[1860]: E1002 19:22:08.112956 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:09.113758 kubelet[1860]: E1002 19:22:09.113697 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:10.114467 kubelet[1860]: E1002 19:22:10.114409 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:11.115440 kubelet[1860]: E1002 19:22:11.115384 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:11.210146 kubelet[1860]: E1002 19:22:11.210086 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b)\"" pod="kube-system/cilium-x2g4b" podUID=4993a1bc-d12d-4d80-8674-1449084f234b Oct 2 19:22:11.974414 kubelet[1860]: E1002 19:22:11.974358 1860 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:12.089182 kubelet[1860]: E1002 19:22:12.089138 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:12.115644 kubelet[1860]: E1002 19:22:12.115585 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:13.116293 kubelet[1860]: E1002 19:22:13.116230 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:14.116482 kubelet[1860]: E1002 19:22:14.116387 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:15.118041 kubelet[1860]: E1002 19:22:15.117986 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:16.118772 kubelet[1860]: E1002 19:22:16.118719 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:17.090712 kubelet[1860]: E1002 19:22:17.090676 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:17.119161 kubelet[1860]: E1002 19:22:17.119096 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:18.119746 kubelet[1860]: E1002 19:22:18.119581 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:19.120280 kubelet[1860]: E1002 19:22:19.120229 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:20.121242 kubelet[1860]: E1002 19:22:20.121187 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:21.122304 kubelet[1860]: E1002 19:22:21.122242 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:22.091728 kubelet[1860]: E1002 19:22:22.091684 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:22.123134 kubelet[1860]: E1002 19:22:22.123051 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:22.219434 env[1300]: time="2023-10-02T19:22:22.219381906Z" level=info msg="CreateContainer within sandbox \"d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}" Oct 2 19:22:22.254480 env[1300]: time="2023-10-02T19:22:22.254421589Z" level=info msg="CreateContainer within sandbox \"d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id \"115d5688fa88b101304db1a8e96a994f6c21040648ebba13031604f516a6840a\"" Oct 2 19:22:22.255071 env[1300]: time="2023-10-02T19:22:22.255031697Z" level=info msg="StartContainer for \"115d5688fa88b101304db1a8e96a994f6c21040648ebba13031604f516a6840a\"" Oct 2 19:22:22.275638 systemd[1]: Started cri-containerd-115d5688fa88b101304db1a8e96a994f6c21040648ebba13031604f516a6840a.scope. Oct 2 19:22:22.283521 systemd[1]: run-containerd-runc-k8s.io-115d5688fa88b101304db1a8e96a994f6c21040648ebba13031604f516a6840a-runc.HeGoVv.mount: Deactivated successfully. Oct 2 19:22:22.297791 systemd[1]: cri-containerd-115d5688fa88b101304db1a8e96a994f6c21040648ebba13031604f516a6840a.scope: Deactivated successfully. 
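Annotation: attempt 5 (container 115d5688…) is created at 19:22:22 and fails exactly as attempts 3 and 4 did. The &Container{…} dump that kuberuntime_manager.go prints with each failure (at 19:20:03, 19:20:53, and again just below) is the mount-cgroup init container of the Cilium DaemonSet. The following is a hand-formatted reconstruction of the fields visible in that dump; the formatting and field selection are mine, the values are copied from the log, and fields not shown there are omitted.

```go
package main

import corev1 "k8s.io/api/core/v1"

// Hand-formatted reconstruction of the &Container{...} dump printed in the log.
var mountCgroup = corev1.Container{
	Name:  "mount-cgroup",
	Image: "quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b",
	Command: []string{"sh", "-ec",
		`cp /usr/bin/cilium-mount /hostbin/cilium-mount;
nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
rm /hostbin/cilium-mount`,
	},
	Env: []corev1.EnvVar{
		{Name: "CGROUP_ROOT", Value: "/run/cilium/cgroupv2"},
		{Name: "BIN_PATH", Value: "/opt/cni/bin"},
	},
	VolumeMounts: []corev1.VolumeMount{
		{Name: "hostproc", MountPath: "/hostproc"},
		{Name: "cni-path", MountPath: "/hostbin"},
		{Name: "kube-api-access-mqfcr", MountPath: "/var/run/secrets/kubernetes.io/serviceaccount", ReadOnly: true},
	},
	SecurityContext: &corev1.SecurityContext{
		Capabilities: &corev1.Capabilities{
			Add:  []corev1.Capability{"SYS_ADMIN", "SYS_CHROOT", "SYS_PTRACE"},
			Drop: []corev1.Capability{"ALL"},
		},
		// The label runc fails to apply ("write /proc/self/attr/keycreate: invalid argument").
		SELinuxOptions: &corev1.SELinuxOptions{Type: "spc_t", Level: "s0"},
	},
}

func main() { _ = mountCgroup }
```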
Oct 2 19:22:22.331164 env[1300]: time="2023-10-02T19:22:22.331073245Z" level=info msg="shim disconnected" id=115d5688fa88b101304db1a8e96a994f6c21040648ebba13031604f516a6840a Oct 2 19:22:22.331164 env[1300]: time="2023-10-02T19:22:22.331163846Z" level=warning msg="cleaning up after shim disconnected" id=115d5688fa88b101304db1a8e96a994f6c21040648ebba13031604f516a6840a namespace=k8s.io Oct 2 19:22:22.331491 env[1300]: time="2023-10-02T19:22:22.331177246Z" level=info msg="cleaning up dead shim" Oct 2 19:22:22.339790 env[1300]: time="2023-10-02T19:22:22.339728064Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:22:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2399 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:22:22Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/115d5688fa88b101304db1a8e96a994f6c21040648ebba13031604f516a6840a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:22:22.340105 env[1300]: time="2023-10-02T19:22:22.340028668Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:22:22.343306 env[1300]: time="2023-10-02T19:22:22.343169711Z" level=error msg="Failed to pipe stdout of container \"115d5688fa88b101304db1a8e96a994f6c21040648ebba13031604f516a6840a\"" error="reading from a closed fifo" Oct 2 19:22:22.343816 env[1300]: time="2023-10-02T19:22:22.343768320Z" level=error msg="Failed to pipe stderr of container \"115d5688fa88b101304db1a8e96a994f6c21040648ebba13031604f516a6840a\"" error="reading from a closed fifo" Oct 2 19:22:22.348652 env[1300]: time="2023-10-02T19:22:22.348584586Z" level=error msg="StartContainer for \"115d5688fa88b101304db1a8e96a994f6c21040648ebba13031604f516a6840a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:22:22.348897 kubelet[1860]: E1002 19:22:22.348874 1860 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="115d5688fa88b101304db1a8e96a994f6c21040648ebba13031604f516a6840a" Oct 2 19:22:22.349092 kubelet[1860]: E1002 19:22:22.349008 1860 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:22:22.349092 kubelet[1860]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:22:22.349092 kubelet[1860]: rm /hostbin/cilium-mount Oct 2 19:22:22.349092 kubelet[1860]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mqfcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:22:22.349351 kubelet[1860]: E1002 19:22:22.349054 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-x2g4b" podUID=4993a1bc-d12d-4d80-8674-1449084f234b Oct 2 19:22:22.589593 kubelet[1860]: I1002 19:22:22.589479 1860 scope.go:115] "RemoveContainer" containerID="61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65" Oct 2 19:22:22.589811 kubelet[1860]: I1002 19:22:22.589777 1860 scope.go:115] "RemoveContainer" containerID="61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65" Oct 2 19:22:22.591167 env[1300]: time="2023-10-02T19:22:22.591107427Z" level=info msg="RemoveContainer for \"61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65\"" Oct 2 19:22:22.591932 env[1300]: time="2023-10-02T19:22:22.591894938Z" level=info msg="RemoveContainer for \"61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65\"" Oct 2 19:22:22.592354 env[1300]: time="2023-10-02T19:22:22.592312144Z" level=error msg="RemoveContainer for \"61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65\" failed" error="failed to set removing state for container \"61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65\": container is already in removing state" Oct 2 19:22:22.592530 kubelet[1860]: E1002 19:22:22.592494 1860 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65\": container is already in removing state" 
containerID="61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65" Oct 2 19:22:22.592641 kubelet[1860]: E1002 19:22:22.592551 1860 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65": container is already in removing state; Skipping pod "cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b)" Oct 2 19:22:22.593195 kubelet[1860]: E1002 19:22:22.593170 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-x2g4b_kube-system(4993a1bc-d12d-4d80-8674-1449084f234b)\"" pod="kube-system/cilium-x2g4b" podUID=4993a1bc-d12d-4d80-8674-1449084f234b Oct 2 19:22:22.600460 env[1300]: time="2023-10-02T19:22:22.599750546Z" level=info msg="RemoveContainer for \"61df6450f2f6367d413aa2564d1a55289376e25267b3e0c6156cff0da289dc65\" returns successfully" Oct 2 19:22:23.123801 kubelet[1860]: E1002 19:22:23.123746 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:23.243075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-115d5688fa88b101304db1a8e96a994f6c21040648ebba13031604f516a6840a-rootfs.mount: Deactivated successfully. Oct 2 19:22:24.124133 kubelet[1860]: E1002 19:22:24.124076 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:25.125249 kubelet[1860]: E1002 19:22:25.125189 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:25.437668 kubelet[1860]: W1002 19:22:25.437626 1860 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4993a1bc_d12d_4d80_8674_1449084f234b.slice/cri-containerd-115d5688fa88b101304db1a8e96a994f6c21040648ebba13031604f516a6840a.scope WatchSource:0}: task 115d5688fa88b101304db1a8e96a994f6c21040648ebba13031604f516a6840a not found: not found Oct 2 19:22:26.126278 kubelet[1860]: E1002 19:22:26.126212 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:27.092964 kubelet[1860]: E1002 19:22:27.092926 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:27.126805 kubelet[1860]: E1002 19:22:27.126738 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:28.127832 kubelet[1860]: E1002 19:22:28.127776 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:29.128551 kubelet[1860]: E1002 19:22:29.128497 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:30.128982 kubelet[1860]: E1002 19:22:30.128919 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:31.129285 kubelet[1860]: E1002 19:22:31.129228 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:22:31.973915 kubelet[1860]: E1002 19:22:31.973854 1860 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:32.094540 kubelet[1860]: E1002 19:22:32.094507 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:32.129905 kubelet[1860]: E1002 19:22:32.129847 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:33.130422 kubelet[1860]: E1002 19:22:33.130360 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:34.131184 kubelet[1860]: E1002 19:22:34.131107 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:35.132924 kubelet[1860]: E1002 19:22:35.131889 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:35.133800 env[1300]: time="2023-10-02T19:22:35.133749627Z" level=info msg="StopPodSandbox for \"d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399\"" Oct 2 19:22:35.136716 env[1300]: time="2023-10-02T19:22:35.133834628Z" level=info msg="Container to stop \"115d5688fa88b101304db1a8e96a994f6c21040648ebba13031604f516a6840a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:22:35.136104 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399-shm.mount: Deactivated successfully. Oct 2 19:22:35.143741 systemd[1]: cri-containerd-d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399.scope: Deactivated successfully. Oct 2 19:22:35.142000 audit: BPF prog-id=68 op=UNLOAD Oct 2 19:22:35.147798 kernel: kauditd_printk_skb: 165 callbacks suppressed Oct 2 19:22:35.147893 kernel: audit: type=1334 audit(1696274555.142:643): prog-id=68 op=UNLOAD Oct 2 19:22:35.155000 audit: BPF prog-id=71 op=UNLOAD Oct 2 19:22:35.162131 kernel: audit: type=1334 audit(1696274555.155:644): prog-id=71 op=UNLOAD Oct 2 19:22:35.178918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399-rootfs.mount: Deactivated successfully. 
Oct 2 19:22:35.214218 env[1300]: time="2023-10-02T19:22:35.214133726Z" level=info msg="shim disconnected" id=d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399 Oct 2 19:22:35.214218 env[1300]: time="2023-10-02T19:22:35.214198427Z" level=warning msg="cleaning up after shim disconnected" id=d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399 namespace=k8s.io Oct 2 19:22:35.214218 env[1300]: time="2023-10-02T19:22:35.214215727Z" level=info msg="cleaning up dead shim" Oct 2 19:22:35.222741 env[1300]: time="2023-10-02T19:22:35.222696033Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:22:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2433 runtime=io.containerd.runc.v2\n" Oct 2 19:22:35.223051 env[1300]: time="2023-10-02T19:22:35.223020637Z" level=info msg="TearDown network for sandbox \"d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399\" successfully" Oct 2 19:22:35.223051 env[1300]: time="2023-10-02T19:22:35.223048337Z" level=info msg="StopPodSandbox for \"d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399\" returns successfully" Oct 2 19:22:35.354239 kubelet[1860]: I1002 19:22:35.354197 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-etc-cni-netd\") pod \"4993a1bc-d12d-4d80-8674-1449084f234b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " Oct 2 19:22:35.354459 kubelet[1860]: I1002 19:22:35.354281 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4993a1bc-d12d-4d80-8674-1449084f234b-clustermesh-secrets\") pod \"4993a1bc-d12d-4d80-8674-1449084f234b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " Oct 2 19:22:35.354459 kubelet[1860]: I1002 19:22:35.354322 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-hostproc\") pod \"4993a1bc-d12d-4d80-8674-1449084f234b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " Oct 2 19:22:35.354459 kubelet[1860]: I1002 19:22:35.354343 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-host-proc-sys-net\") pod \"4993a1bc-d12d-4d80-8674-1449084f234b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " Oct 2 19:22:35.354459 kubelet[1860]: I1002 19:22:35.354363 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-cni-path\") pod \"4993a1bc-d12d-4d80-8674-1449084f234b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " Oct 2 19:22:35.354459 kubelet[1860]: I1002 19:22:35.354386 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-lib-modules\") pod \"4993a1bc-d12d-4d80-8674-1449084f234b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " Oct 2 19:22:35.354459 kubelet[1860]: I1002 19:22:35.354427 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqfcr\" (UniqueName: \"kubernetes.io/projected/4993a1bc-d12d-4d80-8674-1449084f234b-kube-api-access-mqfcr\") pod \"4993a1bc-d12d-4d80-8674-1449084f234b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " Oct 2 19:22:35.354729 
kubelet[1860]: I1002 19:22:35.354450 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-bpf-maps\") pod \"4993a1bc-d12d-4d80-8674-1449084f234b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " Oct 2 19:22:35.354729 kubelet[1860]: I1002 19:22:35.354486 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-cilium-cgroup\") pod \"4993a1bc-d12d-4d80-8674-1449084f234b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " Oct 2 19:22:35.354729 kubelet[1860]: I1002 19:22:35.354521 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4993a1bc-d12d-4d80-8674-1449084f234b-cilium-config-path\") pod \"4993a1bc-d12d-4d80-8674-1449084f234b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " Oct 2 19:22:35.354729 kubelet[1860]: I1002 19:22:35.354561 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-host-proc-sys-kernel\") pod \"4993a1bc-d12d-4d80-8674-1449084f234b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " Oct 2 19:22:35.354729 kubelet[1860]: I1002 19:22:35.354590 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-cilium-run\") pod \"4993a1bc-d12d-4d80-8674-1449084f234b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " Oct 2 19:22:35.354729 kubelet[1860]: I1002 19:22:35.354614 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-xtables-lock\") pod \"4993a1bc-d12d-4d80-8674-1449084f234b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " Oct 2 19:22:35.355046 kubelet[1860]: I1002 19:22:35.354654 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4993a1bc-d12d-4d80-8674-1449084f234b-hubble-tls\") pod \"4993a1bc-d12d-4d80-8674-1449084f234b\" (UID: \"4993a1bc-d12d-4d80-8674-1449084f234b\") " Oct 2 19:22:35.357139 kubelet[1860]: I1002 19:22:35.355491 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4993a1bc-d12d-4d80-8674-1449084f234b" (UID: "4993a1bc-d12d-4d80-8674-1449084f234b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:35.357139 kubelet[1860]: I1002 19:22:35.355561 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4993a1bc-d12d-4d80-8674-1449084f234b" (UID: "4993a1bc-d12d-4d80-8674-1449084f234b"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:35.357139 kubelet[1860]: W1002 19:22:35.355709 1860 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/4993a1bc-d12d-4d80-8674-1449084f234b/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:22:35.358161 kubelet[1860]: I1002 19:22:35.358109 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4993a1bc-d12d-4d80-8674-1449084f234b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4993a1bc-d12d-4d80-8674-1449084f234b" (UID: "4993a1bc-d12d-4d80-8674-1449084f234b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:22:35.358260 kubelet[1860]: I1002 19:22:35.358193 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4993a1bc-d12d-4d80-8674-1449084f234b" (UID: "4993a1bc-d12d-4d80-8674-1449084f234b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:35.358260 kubelet[1860]: I1002 19:22:35.358218 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4993a1bc-d12d-4d80-8674-1449084f234b" (UID: "4993a1bc-d12d-4d80-8674-1449084f234b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:35.358260 kubelet[1860]: I1002 19:22:35.358237 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4993a1bc-d12d-4d80-8674-1449084f234b" (UID: "4993a1bc-d12d-4d80-8674-1449084f234b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:35.358390 kubelet[1860]: I1002 19:22:35.358257 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-hostproc" (OuterVolumeSpecName: "hostproc") pod "4993a1bc-d12d-4d80-8674-1449084f234b" (UID: "4993a1bc-d12d-4d80-8674-1449084f234b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:35.358390 kubelet[1860]: I1002 19:22:35.354212 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4993a1bc-d12d-4d80-8674-1449084f234b" (UID: "4993a1bc-d12d-4d80-8674-1449084f234b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:35.358666 kubelet[1860]: I1002 19:22:35.358641 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4993a1bc-d12d-4d80-8674-1449084f234b" (UID: "4993a1bc-d12d-4d80-8674-1449084f234b"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:35.358795 kubelet[1860]: I1002 19:22:35.358777 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-cni-path" (OuterVolumeSpecName: "cni-path") pod "4993a1bc-d12d-4d80-8674-1449084f234b" (UID: "4993a1bc-d12d-4d80-8674-1449084f234b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:35.358900 kubelet[1860]: I1002 19:22:35.358881 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4993a1bc-d12d-4d80-8674-1449084f234b" (UID: "4993a1bc-d12d-4d80-8674-1449084f234b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:35.361233 systemd[1]: var-lib-kubelet-pods-4993a1bc\x2dd12d\x2d4d80\x2d8674\x2d1449084f234b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmqfcr.mount: Deactivated successfully. Oct 2 19:22:35.362662 kubelet[1860]: I1002 19:22:35.362627 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4993a1bc-d12d-4d80-8674-1449084f234b-kube-api-access-mqfcr" (OuterVolumeSpecName: "kube-api-access-mqfcr") pod "4993a1bc-d12d-4d80-8674-1449084f234b" (UID: "4993a1bc-d12d-4d80-8674-1449084f234b"). InnerVolumeSpecName "kube-api-access-mqfcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:22:35.365450 systemd[1]: var-lib-kubelet-pods-4993a1bc\x2dd12d\x2d4d80\x2d8674\x2d1449084f234b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:22:35.366370 kubelet[1860]: I1002 19:22:35.366348 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4993a1bc-d12d-4d80-8674-1449084f234b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4993a1bc-d12d-4d80-8674-1449084f234b" (UID: "4993a1bc-d12d-4d80-8674-1449084f234b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:22:35.369655 kubelet[1860]: I1002 19:22:35.369630 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4993a1bc-d12d-4d80-8674-1449084f234b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4993a1bc-d12d-4d80-8674-1449084f234b" (UID: "4993a1bc-d12d-4d80-8674-1449084f234b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:22:35.369693 systemd[1]: var-lib-kubelet-pods-4993a1bc\x2dd12d\x2d4d80\x2d8674\x2d1449084f234b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Oct 2 19:22:35.455522 kubelet[1860]: I1002 19:22:35.455472 1860 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-etc-cni-netd\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:35.455751 kubelet[1860]: I1002 19:22:35.455537 1860 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4993a1bc-d12d-4d80-8674-1449084f234b-clustermesh-secrets\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:35.455751 kubelet[1860]: I1002 19:22:35.455554 1860 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-hostproc\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:35.455751 kubelet[1860]: I1002 19:22:35.455569 1860 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-host-proc-sys-net\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:35.455751 kubelet[1860]: I1002 19:22:35.455585 1860 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-cni-path\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:35.455751 kubelet[1860]: I1002 19:22:35.455598 1860 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-lib-modules\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:35.455751 kubelet[1860]: I1002 19:22:35.455613 1860 reconciler.go:399] "Volume detached for volume \"kube-api-access-mqfcr\" (UniqueName: \"kubernetes.io/projected/4993a1bc-d12d-4d80-8674-1449084f234b-kube-api-access-mqfcr\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:35.455751 kubelet[1860]: I1002 19:22:35.455628 1860 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-bpf-maps\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:35.455751 kubelet[1860]: I1002 19:22:35.455642 1860 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-cilium-cgroup\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:35.456021 kubelet[1860]: I1002 19:22:35.455656 1860 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4993a1bc-d12d-4d80-8674-1449084f234b-cilium-config-path\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:35.456021 kubelet[1860]: I1002 19:22:35.455671 1860 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-host-proc-sys-kernel\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:35.456021 kubelet[1860]: I1002 19:22:35.455688 1860 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-cilium-run\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:35.456021 kubelet[1860]: I1002 19:22:35.455702 1860 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4993a1bc-d12d-4d80-8674-1449084f234b-xtables-lock\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:35.456021 kubelet[1860]: I1002 19:22:35.455721 1860 reconciler.go:399] "Volume detached for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4993a1bc-d12d-4d80-8674-1449084f234b-hubble-tls\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:35.614659 kubelet[1860]: I1002 19:22:35.614627 1860 scope.go:115] "RemoveContainer" containerID="115d5688fa88b101304db1a8e96a994f6c21040648ebba13031604f516a6840a" Oct 2 19:22:35.616741 env[1300]: time="2023-10-02T19:22:35.616376128Z" level=info msg="RemoveContainer for \"115d5688fa88b101304db1a8e96a994f6c21040648ebba13031604f516a6840a\"" Oct 2 19:22:35.619012 systemd[1]: Removed slice kubepods-burstable-pod4993a1bc_d12d_4d80_8674_1449084f234b.slice. Oct 2 19:22:35.627168 env[1300]: time="2023-10-02T19:22:35.627135162Z" level=info msg="RemoveContainer for \"115d5688fa88b101304db1a8e96a994f6c21040648ebba13031604f516a6840a\" returns successfully" Oct 2 19:22:35.643554 kubelet[1860]: I1002 19:22:35.643524 1860 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:22:35.648364 kubelet[1860]: E1002 19:22:35.648329 1860 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="4993a1bc-d12d-4d80-8674-1449084f234b" containerName="mount-cgroup" Oct 2 19:22:35.648364 kubelet[1860]: E1002 19:22:35.648363 1860 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="4993a1bc-d12d-4d80-8674-1449084f234b" containerName="mount-cgroup" Oct 2 19:22:35.648364 kubelet[1860]: E1002 19:22:35.648371 1860 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="4993a1bc-d12d-4d80-8674-1449084f234b" containerName="mount-cgroup" Oct 2 19:22:35.648606 kubelet[1860]: E1002 19:22:35.648379 1860 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="4993a1bc-d12d-4d80-8674-1449084f234b" containerName="mount-cgroup" Oct 2 19:22:35.648606 kubelet[1860]: I1002 19:22:35.648416 1860 memory_manager.go:345] "RemoveStaleState removing state" podUID="4993a1bc-d12d-4d80-8674-1449084f234b" containerName="mount-cgroup" Oct 2 19:22:35.648606 kubelet[1860]: I1002 19:22:35.648426 1860 memory_manager.go:345] "RemoveStaleState removing state" podUID="4993a1bc-d12d-4d80-8674-1449084f234b" containerName="mount-cgroup" Oct 2 19:22:35.648606 kubelet[1860]: I1002 19:22:35.648435 1860 memory_manager.go:345] "RemoveStaleState removing state" podUID="4993a1bc-d12d-4d80-8674-1449084f234b" containerName="mount-cgroup" Oct 2 19:22:35.648606 kubelet[1860]: I1002 19:22:35.648442 1860 memory_manager.go:345] "RemoveStaleState removing state" podUID="4993a1bc-d12d-4d80-8674-1449084f234b" containerName="mount-cgroup" Oct 2 19:22:35.648606 kubelet[1860]: E1002 19:22:35.648458 1860 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="4993a1bc-d12d-4d80-8674-1449084f234b" containerName="mount-cgroup" Oct 2 19:22:35.648606 kubelet[1860]: E1002 19:22:35.648467 1860 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="4993a1bc-d12d-4d80-8674-1449084f234b" containerName="mount-cgroup" Oct 2 19:22:35.648606 kubelet[1860]: I1002 19:22:35.648484 1860 memory_manager.go:345] "RemoveStaleState removing state" podUID="4993a1bc-d12d-4d80-8674-1449084f234b" containerName="mount-cgroup" Oct 2 19:22:35.648606 kubelet[1860]: I1002 19:22:35.648492 1860 memory_manager.go:345] "RemoveStaleState removing state" podUID="4993a1bc-d12d-4d80-8674-1449084f234b" containerName="mount-cgroup" Oct 2 19:22:35.654353 systemd[1]: Created slice kubepods-burstable-pod1cd2cac6_540c_40db_abec_393fcae56ea3.slice. 
Oct 2 19:22:35.758082 kubelet[1860]: I1002 19:22:35.757958 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-cilium-run\") pod \"cilium-8chnp\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " pod="kube-system/cilium-8chnp" Oct 2 19:22:35.758354 kubelet[1860]: I1002 19:22:35.758338 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1cd2cac6-540c-40db-abec-393fcae56ea3-clustermesh-secrets\") pod \"cilium-8chnp\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " pod="kube-system/cilium-8chnp" Oct 2 19:22:35.758512 kubelet[1860]: I1002 19:22:35.758502 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-host-proc-sys-kernel\") pod \"cilium-8chnp\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " pod="kube-system/cilium-8chnp" Oct 2 19:22:35.758646 kubelet[1860]: I1002 19:22:35.758637 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-xtables-lock\") pod \"cilium-8chnp\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " pod="kube-system/cilium-8chnp" Oct 2 19:22:35.758776 kubelet[1860]: I1002 19:22:35.758767 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48njf\" (UniqueName: \"kubernetes.io/projected/1cd2cac6-540c-40db-abec-393fcae56ea3-kube-api-access-48njf\") pod \"cilium-8chnp\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " pod="kube-system/cilium-8chnp" Oct 2 19:22:35.758911 kubelet[1860]: I1002 19:22:35.758902 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1cd2cac6-540c-40db-abec-393fcae56ea3-hubble-tls\") pod \"cilium-8chnp\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " pod="kube-system/cilium-8chnp" Oct 2 19:22:35.759039 kubelet[1860]: I1002 19:22:35.759026 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-bpf-maps\") pod \"cilium-8chnp\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " pod="kube-system/cilium-8chnp" Oct 2 19:22:35.759188 kubelet[1860]: I1002 19:22:35.759178 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-cilium-cgroup\") pod \"cilium-8chnp\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " pod="kube-system/cilium-8chnp" Oct 2 19:22:35.759325 kubelet[1860]: I1002 19:22:35.759317 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-etc-cni-netd\") pod \"cilium-8chnp\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " pod="kube-system/cilium-8chnp" Oct 2 19:22:35.759450 kubelet[1860]: I1002 19:22:35.759442 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-lib-modules\") pod \"cilium-8chnp\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " pod="kube-system/cilium-8chnp" Oct 2 19:22:35.759592 kubelet[1860]: I1002 19:22:35.759578 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1cd2cac6-540c-40db-abec-393fcae56ea3-cilium-config-path\") pod \"cilium-8chnp\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " pod="kube-system/cilium-8chnp" Oct 2 19:22:35.759767 kubelet[1860]: I1002 19:22:35.759740 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-host-proc-sys-net\") pod \"cilium-8chnp\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " pod="kube-system/cilium-8chnp" Oct 2 19:22:35.759767 kubelet[1860]: I1002 19:22:35.759775 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-hostproc\") pod \"cilium-8chnp\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " pod="kube-system/cilium-8chnp" Oct 2 19:22:35.759921 kubelet[1860]: I1002 19:22:35.759802 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-cni-path\") pod \"cilium-8chnp\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " pod="kube-system/cilium-8chnp" Oct 2 19:22:35.962745 env[1300]: time="2023-10-02T19:22:35.962689035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8chnp,Uid:1cd2cac6-540c-40db-abec-393fcae56ea3,Namespace:kube-system,Attempt:0,}" Oct 2 19:22:35.992230 env[1300]: time="2023-10-02T19:22:35.992154001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:22:35.992230 env[1300]: time="2023-10-02T19:22:35.992194002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:22:35.992230 env[1300]: time="2023-10-02T19:22:35.992208002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:22:35.992689 env[1300]: time="2023-10-02T19:22:35.992628807Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a2155904b1a709b3a1a41fd8ae84724d2376a4b17c34825fe7d06a2136400fb pid=2458 runtime=io.containerd.runc.v2 Oct 2 19:22:36.005721 systemd[1]: Started cri-containerd-7a2155904b1a709b3a1a41fd8ae84724d2376a4b17c34825fe7d06a2136400fb.scope. 
Oct 2 19:22:36.018000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.037233 kernel: audit: type=1400 audit(1696274556.018:645): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.018000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.018000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.062855 kernel: audit: type=1400 audit(1696274556.018:646): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.062958 kernel: audit: type=1400 audit(1696274556.018:647): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.062986 kernel: audit: type=1400 audit(1696274556.018:648): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.018000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.018000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.089264 kernel: audit: type=1400 audit(1696274556.018:649): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.018000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.101404 kernel: audit: type=1400 audit(1696274556.018:650): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.103136 kernel: audit: type=1400 audit(1696274556.018:651): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.018000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.018000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.126547 kernel: audit: type=1400 audit(1696274556.018:652): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.018000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.035000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.035000 audit: BPF prog-id=79 op=LOAD Oct 2 19:22:36.036000 audit[2468]: AVC avc: denied { bpf } for pid=2468 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.036000 audit[2468]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000149c48 a2=10 a3=1c items=0 ppid=2458 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:22:36.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761323135353930346231613730396233613161343166643861653834 Oct 2 19:22:36.036000 audit[2468]: AVC avc: denied { perfmon } for pid=2468 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.036000 audit[2468]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001496b0 a2=3c a3=c items=0 ppid=2458 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:22:36.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761323135353930346231613730396233613161343166643861653834 Oct 2 19:22:36.036000 audit[2468]: AVC avc: denied { bpf } for pid=2468 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.036000 audit[2468]: AVC avc: denied { bpf } for pid=2468 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.036000 audit[2468]: AVC avc: denied { bpf } for pid=2468 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.036000 audit[2468]: AVC avc: denied { perfmon } for pid=2468 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.036000 audit[2468]: AVC avc: denied { perfmon } for pid=2468 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.036000 audit[2468]: AVC avc: denied { perfmon } for pid=2468 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.036000 audit[2468]: AVC avc: 
denied { perfmon } for pid=2468 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.036000 audit[2468]: AVC avc: denied { perfmon } for pid=2468 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.036000 audit[2468]: AVC avc: denied { bpf } for pid=2468 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.036000 audit[2468]: AVC avc: denied { bpf } for pid=2468 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.036000 audit: BPF prog-id=80 op=LOAD Oct 2 19:22:36.036000 audit[2468]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001499d8 a2=78 a3=c000024640 items=0 ppid=2458 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:22:36.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761323135353930346231613730396233613161343166643861653834 Oct 2 19:22:36.048000 audit[2468]: AVC avc: denied { bpf } for pid=2468 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.048000 audit[2468]: AVC avc: denied { bpf } for pid=2468 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.048000 audit[2468]: AVC avc: denied { perfmon } for pid=2468 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.048000 audit[2468]: AVC avc: denied { perfmon } for pid=2468 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.048000 audit[2468]: AVC avc: denied { perfmon } for pid=2468 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.048000 audit[2468]: AVC avc: denied { perfmon } for pid=2468 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.048000 audit[2468]: AVC avc: denied { perfmon } for pid=2468 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.048000 audit[2468]: AVC avc: denied { bpf } for pid=2468 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.048000 audit[2468]: AVC avc: denied { bpf } for pid=2468 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.048000 audit: BPF prog-id=81 op=LOAD Oct 2 19:22:36.048000 audit[2468]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 
a1=c000149770 a2=78 a3=c000024688 items=0 ppid=2458 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:22:36.048000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761323135353930346231613730396233613161343166643861653834 Oct 2 19:22:36.074000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:22:36.075000 audit: BPF prog-id=80 op=UNLOAD Oct 2 19:22:36.075000 audit[2468]: AVC avc: denied { bpf } for pid=2468 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.075000 audit[2468]: AVC avc: denied { bpf } for pid=2468 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.075000 audit[2468]: AVC avc: denied { bpf } for pid=2468 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.075000 audit[2468]: AVC avc: denied { perfmon } for pid=2468 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.075000 audit[2468]: AVC avc: denied { perfmon } for pid=2468 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.075000 audit[2468]: AVC avc: denied { perfmon } for pid=2468 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.075000 audit[2468]: AVC avc: denied { perfmon } for pid=2468 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.075000 audit[2468]: AVC avc: denied { perfmon } for pid=2468 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.075000 audit[2468]: AVC avc: denied { bpf } for pid=2468 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.075000 audit[2468]: AVC avc: denied { bpf } for pid=2468 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:36.075000 audit: BPF prog-id=82 op=LOAD Oct 2 19:22:36.075000 audit[2468]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000149c30 a2=78 a3=c000024a98 items=0 ppid=2458 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:22:36.075000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761323135353930346231613730396233613161343166643861653834 Oct 2 19:22:36.129142 env[1300]: time="2023-10-02T19:22:36.126910266Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8chnp,Uid:1cd2cac6-540c-40db-abec-393fcae56ea3,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a2155904b1a709b3a1a41fd8ae84724d2376a4b17c34825fe7d06a2136400fb\"" Oct 2 19:22:36.129990 env[1300]: time="2023-10-02T19:22:36.129953003Z" level=info msg="CreateContainer within sandbox \"7a2155904b1a709b3a1a41fd8ae84724d2376a4b17c34825fe7d06a2136400fb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:22:36.132896 kubelet[1860]: E1002 19:22:36.132871 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:36.171759 env[1300]: time="2023-10-02T19:22:36.171707319Z" level=info msg="CreateContainer within sandbox \"7a2155904b1a709b3a1a41fd8ae84724d2376a4b17c34825fe7d06a2136400fb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8fe29b5f891e69abce095a4fb540caee553fdcaac35a7a607e82822699901c16\"" Oct 2 19:22:36.172505 env[1300]: time="2023-10-02T19:22:36.172469828Z" level=info msg="StartContainer for \"8fe29b5f891e69abce095a4fb540caee553fdcaac35a7a607e82822699901c16\"" Oct 2 19:22:36.192980 systemd[1]: Started cri-containerd-8fe29b5f891e69abce095a4fb540caee553fdcaac35a7a607e82822699901c16.scope. Oct 2 19:22:36.214278 kubelet[1860]: I1002 19:22:36.212882 1860 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4993a1bc-d12d-4d80-8674-1449084f234b path="/var/lib/kubelet/pods/4993a1bc-d12d-4d80-8674-1449084f234b/volumes" Oct 2 19:22:36.213703 systemd[1]: cri-containerd-8fe29b5f891e69abce095a4fb540caee553fdcaac35a7a607e82822699901c16.scope: Deactivated successfully. Oct 2 19:22:36.272364 env[1300]: time="2023-10-02T19:22:36.272298460Z" level=info msg="shim disconnected" id=8fe29b5f891e69abce095a4fb540caee553fdcaac35a7a607e82822699901c16 Oct 2 19:22:36.272364 env[1300]: time="2023-10-02T19:22:36.272358861Z" level=warning msg="cleaning up after shim disconnected" id=8fe29b5f891e69abce095a4fb540caee553fdcaac35a7a607e82822699901c16 namespace=k8s.io Oct 2 19:22:36.272364 env[1300]: time="2023-10-02T19:22:36.272371361Z" level=info msg="cleaning up dead shim" Oct 2 19:22:36.280717 env[1300]: time="2023-10-02T19:22:36.280668563Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:22:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2515 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:22:36Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8fe29b5f891e69abce095a4fb540caee553fdcaac35a7a607e82822699901c16/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:22:36.281018 env[1300]: time="2023-10-02T19:22:36.280955367Z" level=error msg="copy shim log" error="read /proc/self/fd/31: file already closed" Oct 2 19:22:36.281278 env[1300]: time="2023-10-02T19:22:36.281229370Z" level=error msg="Failed to pipe stdout of container \"8fe29b5f891e69abce095a4fb540caee553fdcaac35a7a607e82822699901c16\"" error="reading from a closed fifo" Oct 2 19:22:36.283215 env[1300]: time="2023-10-02T19:22:36.283165294Z" level=error msg="Failed to pipe stderr of container \"8fe29b5f891e69abce095a4fb540caee553fdcaac35a7a607e82822699901c16\"" error="reading from a closed fifo" Oct 2 19:22:36.290498 env[1300]: time="2023-10-02T19:22:36.289791576Z" level=error msg="StartContainer for \"8fe29b5f891e69abce095a4fb540caee553fdcaac35a7a607e82822699901c16\" failed" error="failed to create containerd task: failed to 
create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:22:36.290631 kubelet[1860]: E1002 19:22:36.290032 1860 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8fe29b5f891e69abce095a4fb540caee553fdcaac35a7a607e82822699901c16" Oct 2 19:22:36.290631 kubelet[1860]: E1002 19:22:36.290175 1860 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:22:36.290631 kubelet[1860]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:22:36.290631 kubelet[1860]: rm /hostbin/cilium-mount Oct 2 19:22:36.290850 kubelet[1860]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-48njf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-8chnp_kube-system(1cd2cac6-540c-40db-abec-393fcae56ea3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:22:36.290966 kubelet[1860]: E1002 19:22:36.290221 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-8chnp" podUID=1cd2cac6-540c-40db-abec-393fcae56ea3 Oct 2 19:22:36.618472 env[1300]: time="2023-10-02T19:22:36.618327831Z" level=info msg="StopPodSandbox for \"7a2155904b1a709b3a1a41fd8ae84724d2376a4b17c34825fe7d06a2136400fb\"" Oct 2 19:22:36.618472 env[1300]: 
time="2023-10-02T19:22:36.618401332Z" level=info msg="Container to stop \"8fe29b5f891e69abce095a4fb540caee553fdcaac35a7a607e82822699901c16\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:22:36.625000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:22:36.626461 systemd[1]: cri-containerd-7a2155904b1a709b3a1a41fd8ae84724d2376a4b17c34825fe7d06a2136400fb.scope: Deactivated successfully. Oct 2 19:22:36.630000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:22:36.665712 env[1300]: time="2023-10-02T19:22:36.665646215Z" level=info msg="shim disconnected" id=7a2155904b1a709b3a1a41fd8ae84724d2376a4b17c34825fe7d06a2136400fb Oct 2 19:22:36.665712 env[1300]: time="2023-10-02T19:22:36.665706616Z" level=warning msg="cleaning up after shim disconnected" id=7a2155904b1a709b3a1a41fd8ae84724d2376a4b17c34825fe7d06a2136400fb namespace=k8s.io Oct 2 19:22:36.665712 env[1300]: time="2023-10-02T19:22:36.665718616Z" level=info msg="cleaning up dead shim" Oct 2 19:22:36.674425 env[1300]: time="2023-10-02T19:22:36.674369423Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:22:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2545 runtime=io.containerd.runc.v2\n" Oct 2 19:22:36.674737 env[1300]: time="2023-10-02T19:22:36.674700927Z" level=info msg="TearDown network for sandbox \"7a2155904b1a709b3a1a41fd8ae84724d2376a4b17c34825fe7d06a2136400fb\" successfully" Oct 2 19:22:36.674833 env[1300]: time="2023-10-02T19:22:36.674734827Z" level=info msg="StopPodSandbox for \"7a2155904b1a709b3a1a41fd8ae84724d2376a4b17c34825fe7d06a2136400fb\" returns successfully" Oct 2 19:22:36.767163 kubelet[1860]: I1002 19:22:36.766545 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-host-proc-sys-kernel\") pod \"1cd2cac6-540c-40db-abec-393fcae56ea3\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " Oct 2 19:22:36.767163 kubelet[1860]: I1002 19:22:36.766614 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-bpf-maps\") pod \"1cd2cac6-540c-40db-abec-393fcae56ea3\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " Oct 2 19:22:36.767163 kubelet[1860]: I1002 19:22:36.766647 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-etc-cni-netd\") pod \"1cd2cac6-540c-40db-abec-393fcae56ea3\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " Oct 2 19:22:36.767163 kubelet[1860]: I1002 19:22:36.766641 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1cd2cac6-540c-40db-abec-393fcae56ea3" (UID: "1cd2cac6-540c-40db-abec-393fcae56ea3"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:36.767163 kubelet[1860]: I1002 19:22:36.766681 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-cilium-run\") pod \"1cd2cac6-540c-40db-abec-393fcae56ea3\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " Oct 2 19:22:36.767163 kubelet[1860]: I1002 19:22:36.766714 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-xtables-lock\") pod \"1cd2cac6-540c-40db-abec-393fcae56ea3\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " Oct 2 19:22:36.767708 kubelet[1860]: I1002 19:22:36.766746 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-lib-modules\") pod \"1cd2cac6-540c-40db-abec-393fcae56ea3\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " Oct 2 19:22:36.767708 kubelet[1860]: I1002 19:22:36.766731 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1cd2cac6-540c-40db-abec-393fcae56ea3" (UID: "1cd2cac6-540c-40db-abec-393fcae56ea3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:36.767708 kubelet[1860]: I1002 19:22:36.766779 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-cni-path\") pod \"1cd2cac6-540c-40db-abec-393fcae56ea3\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " Oct 2 19:22:36.767708 kubelet[1860]: I1002 19:22:36.766795 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1cd2cac6-540c-40db-abec-393fcae56ea3" (UID: "1cd2cac6-540c-40db-abec-393fcae56ea3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:36.767708 kubelet[1860]: I1002 19:22:36.766813 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-host-proc-sys-net\") pod \"1cd2cac6-540c-40db-abec-393fcae56ea3\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " Oct 2 19:22:36.768327 kubelet[1860]: I1002 19:22:36.766837 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1cd2cac6-540c-40db-abec-393fcae56ea3" (UID: "1cd2cac6-540c-40db-abec-393fcae56ea3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:36.768327 kubelet[1860]: I1002 19:22:36.766708 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1cd2cac6-540c-40db-abec-393fcae56ea3" (UID: "1cd2cac6-540c-40db-abec-393fcae56ea3"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:36.768327 kubelet[1860]: I1002 19:22:36.766853 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1cd2cac6-540c-40db-abec-393fcae56ea3-clustermesh-secrets\") pod \"1cd2cac6-540c-40db-abec-393fcae56ea3\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " Oct 2 19:22:36.768327 kubelet[1860]: I1002 19:22:36.766870 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-cni-path" (OuterVolumeSpecName: "cni-path") pod "1cd2cac6-540c-40db-abec-393fcae56ea3" (UID: "1cd2cac6-540c-40db-abec-393fcae56ea3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:36.768327 kubelet[1860]: I1002 19:22:36.766893 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48njf\" (UniqueName: \"kubernetes.io/projected/1cd2cac6-540c-40db-abec-393fcae56ea3-kube-api-access-48njf\") pod \"1cd2cac6-540c-40db-abec-393fcae56ea3\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " Oct 2 19:22:36.768544 kubelet[1860]: I1002 19:22:36.766941 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1cd2cac6-540c-40db-abec-393fcae56ea3-hubble-tls\") pod \"1cd2cac6-540c-40db-abec-393fcae56ea3\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " Oct 2 19:22:36.768544 kubelet[1860]: I1002 19:22:36.766974 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-cilium-cgroup\") pod \"1cd2cac6-540c-40db-abec-393fcae56ea3\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " Oct 2 19:22:36.768544 kubelet[1860]: I1002 19:22:36.767010 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1cd2cac6-540c-40db-abec-393fcae56ea3-cilium-config-path\") pod \"1cd2cac6-540c-40db-abec-393fcae56ea3\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " Oct 2 19:22:36.768544 kubelet[1860]: I1002 19:22:36.767048 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-hostproc\") pod \"1cd2cac6-540c-40db-abec-393fcae56ea3\" (UID: \"1cd2cac6-540c-40db-abec-393fcae56ea3\") " Oct 2 19:22:36.768544 kubelet[1860]: I1002 19:22:36.767102 1860 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-cni-path\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:36.768544 kubelet[1860]: I1002 19:22:36.767924 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1cd2cac6-540c-40db-abec-393fcae56ea3" (UID: "1cd2cac6-540c-40db-abec-393fcae56ea3"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:36.769146 kubelet[1860]: I1002 19:22:36.768828 1860 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-cilium-run\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:36.769146 kubelet[1860]: I1002 19:22:36.768862 1860 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-lib-modules\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:36.769146 kubelet[1860]: I1002 19:22:36.768879 1860 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-etc-cni-netd\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:36.769146 kubelet[1860]: I1002 19:22:36.768896 1860 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-host-proc-sys-kernel\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:36.769146 kubelet[1860]: I1002 19:22:36.768911 1860 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-bpf-maps\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:36.769146 kubelet[1860]: I1002 19:22:36.768937 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-hostproc" (OuterVolumeSpecName: "hostproc") pod "1cd2cac6-540c-40db-abec-393fcae56ea3" (UID: "1cd2cac6-540c-40db-abec-393fcae56ea3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:36.769146 kubelet[1860]: I1002 19:22:36.768966 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1cd2cac6-540c-40db-abec-393fcae56ea3" (UID: "1cd2cac6-540c-40db-abec-393fcae56ea3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:36.770150 kubelet[1860]: W1002 19:22:36.769502 1860 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/1cd2cac6-540c-40db-abec-393fcae56ea3/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:22:36.771466 kubelet[1860]: I1002 19:22:36.766776 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1cd2cac6-540c-40db-abec-393fcae56ea3" (UID: "1cd2cac6-540c-40db-abec-393fcae56ea3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:22:36.772261 kubelet[1860]: I1002 19:22:36.772235 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cd2cac6-540c-40db-abec-393fcae56ea3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1cd2cac6-540c-40db-abec-393fcae56ea3" (UID: "1cd2cac6-540c-40db-abec-393fcae56ea3"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:22:36.772444 kubelet[1860]: I1002 19:22:36.772422 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cd2cac6-540c-40db-abec-393fcae56ea3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1cd2cac6-540c-40db-abec-393fcae56ea3" (UID: "1cd2cac6-540c-40db-abec-393fcae56ea3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:22:36.775512 kubelet[1860]: I1002 19:22:36.775485 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cd2cac6-540c-40db-abec-393fcae56ea3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1cd2cac6-540c-40db-abec-393fcae56ea3" (UID: "1cd2cac6-540c-40db-abec-393fcae56ea3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:22:36.775613 kubelet[1860]: I1002 19:22:36.775499 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cd2cac6-540c-40db-abec-393fcae56ea3-kube-api-access-48njf" (OuterVolumeSpecName: "kube-api-access-48njf") pod "1cd2cac6-540c-40db-abec-393fcae56ea3" (UID: "1cd2cac6-540c-40db-abec-393fcae56ea3"). InnerVolumeSpecName "kube-api-access-48njf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:22:36.869306 kubelet[1860]: I1002 19:22:36.869074 1860 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-xtables-lock\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:36.869306 kubelet[1860]: I1002 19:22:36.869153 1860 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-host-proc-sys-net\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:36.869306 kubelet[1860]: I1002 19:22:36.869174 1860 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1cd2cac6-540c-40db-abec-393fcae56ea3-clustermesh-secrets\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:36.869306 kubelet[1860]: I1002 19:22:36.869199 1860 reconciler.go:399] "Volume detached for volume \"kube-api-access-48njf\" (UniqueName: \"kubernetes.io/projected/1cd2cac6-540c-40db-abec-393fcae56ea3-kube-api-access-48njf\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:36.869306 kubelet[1860]: I1002 19:22:36.869216 1860 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1cd2cac6-540c-40db-abec-393fcae56ea3-hubble-tls\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:36.869306 kubelet[1860]: I1002 19:22:36.869231 1860 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-cilium-cgroup\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:36.869306 kubelet[1860]: I1002 19:22:36.869246 1860 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1cd2cac6-540c-40db-abec-393fcae56ea3-cilium-config-path\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:36.869306 kubelet[1860]: I1002 19:22:36.869263 1860 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1cd2cac6-540c-40db-abec-393fcae56ea3-hostproc\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:22:37.095852 kubelet[1860]: 
E1002 19:22:37.095819 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:37.133486 kubelet[1860]: E1002 19:22:37.133341 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:37.140347 systemd[1]: run-containerd-runc-k8s.io-8fe29b5f891e69abce095a4fb540caee553fdcaac35a7a607e82822699901c16-runc.C8NYpM.mount: Deactivated successfully. Oct 2 19:22:37.140473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fe29b5f891e69abce095a4fb540caee553fdcaac35a7a607e82822699901c16-rootfs.mount: Deactivated successfully. Oct 2 19:22:37.140549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a2155904b1a709b3a1a41fd8ae84724d2376a4b17c34825fe7d06a2136400fb-rootfs.mount: Deactivated successfully. Oct 2 19:22:37.140618 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a2155904b1a709b3a1a41fd8ae84724d2376a4b17c34825fe7d06a2136400fb-shm.mount: Deactivated successfully. Oct 2 19:22:37.140698 systemd[1]: var-lib-kubelet-pods-1cd2cac6\x2d540c\x2d40db\x2dabec\x2d393fcae56ea3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:22:37.140775 systemd[1]: var-lib-kubelet-pods-1cd2cac6\x2d540c\x2d40db\x2dabec\x2d393fcae56ea3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d48njf.mount: Deactivated successfully. Oct 2 19:22:37.140854 systemd[1]: var-lib-kubelet-pods-1cd2cac6\x2d540c\x2d40db\x2dabec\x2d393fcae56ea3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:22:37.623414 kubelet[1860]: I1002 19:22:37.623382 1860 scope.go:115] "RemoveContainer" containerID="8fe29b5f891e69abce095a4fb540caee553fdcaac35a7a607e82822699901c16" Oct 2 19:22:37.627207 systemd[1]: Removed slice kubepods-burstable-pod1cd2cac6_540c_40db_abec_393fcae56ea3.slice. 
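The entries from 19:22:36.766 through 19:22:36.869 show kubelet's volume reconciler tearing down every volume of pod 1cd2cac6-540c-40db-abec-393fcae56ea3 (the host-path mounts, the clustermesh-secrets Secret, the cilium-config-path ConfigMap, and the two projected volumes) and then marking each one detached on node 10.200.8.48. A minimal, hypothetical Go helper for reading that wall of entries — the program and its names are mine, not part of kubelet or Flatcar; only the two message formats are taken from the journal above:

```go
// teardown_tally.go — illustrative only: tally "UnmountVolume.TearDown succeeded"
// and "Volume detached" messages from journal text fed on stdin, grouping the
// torn-down volumes by pod UID, so teardown completeness can be eyeballed.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	// UnmountVolume.TearDown succeeded for volume "kubernetes.io/<plugin>/<pod-uid>-<volume>"
	tearDownRe = regexp.MustCompile(`UnmountVolume\.TearDown succeeded for volume "kubernetes\.io/[^/]+/([0-9a-f-]{36})-([^"]+)"`)
	// "Volume detached for volume \"<volume>\" ..." (quotes are escaped in the structured log line)
	detachedRe = regexp.MustCompile(`Volume detached for volume \\"([^"\\]+)\\"`)
)

func main() {
	tornDown := map[string][]string{} // pod UID -> volume names torn down
	detached := 0

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		for _, m := range tearDownRe.FindAllStringSubmatch(line, -1) {
			tornDown[m[1]] = append(tornDown[m[1]], m[2])
		}
		detached += len(detachedRe.FindAllStringSubmatch(line, -1))
	}
	for uid, vols := range tornDown {
		fmt.Printf("pod %s: %d volumes torn down: %v\n", uid, len(vols), vols)
	}
	fmt.Printf("%d 'Volume detached' confirmations seen\n", detached)
}
```

For the pod above, both counts come to fourteen, matching the fourteen UnmountVolume operations the reconciler starts at reconciler.go:211.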
Oct 2 19:22:37.628731 env[1300]: time="2023-10-02T19:22:37.627788435Z" level=info msg="RemoveContainer for \"8fe29b5f891e69abce095a4fb540caee553fdcaac35a7a607e82822699901c16\"" Oct 2 19:22:37.636399 env[1300]: time="2023-10-02T19:22:37.636361240Z" level=info msg="RemoveContainer for \"8fe29b5f891e69abce095a4fb540caee553fdcaac35a7a607e82822699901c16\" returns successfully" Oct 2 19:22:38.134049 kubelet[1860]: E1002 19:22:38.133987 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:38.213295 kubelet[1860]: I1002 19:22:38.212921 1860 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=1cd2cac6-540c-40db-abec-393fcae56ea3 path="/var/lib/kubelet/pods/1cd2cac6-540c-40db-abec-393fcae56ea3/volumes" Oct 2 19:22:39.134590 kubelet[1860]: E1002 19:22:39.134529 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:39.376878 kubelet[1860]: W1002 19:22:39.376829 1860 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cd2cac6_540c_40db_abec_393fcae56ea3.slice/cri-containerd-8fe29b5f891e69abce095a4fb540caee553fdcaac35a7a607e82822699901c16.scope WatchSource:0}: container "8fe29b5f891e69abce095a4fb540caee553fdcaac35a7a607e82822699901c16" in namespace "k8s.io": not found Oct 2 19:22:40.081069 kubelet[1860]: I1002 19:22:40.081026 1860 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:22:40.081069 kubelet[1860]: E1002 19:22:40.081084 1860 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="1cd2cac6-540c-40db-abec-393fcae56ea3" containerName="mount-cgroup" Oct 2 19:22:40.081395 kubelet[1860]: I1002 19:22:40.081158 1860 memory_manager.go:345] "RemoveStaleState removing state" podUID="1cd2cac6-540c-40db-abec-393fcae56ea3" containerName="mount-cgroup" Oct 2 19:22:40.086549 systemd[1]: Created slice kubepods-besteffort-podbce93142_870d_48b4_9252_2a9d7b6c47cc.slice. Oct 2 19:22:40.100434 kubelet[1860]: I1002 19:22:40.100407 1860 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:22:40.105282 systemd[1]: Created slice kubepods-burstable-pod34e2ca7d_5877_4e9b_a55a_4663258099e9.slice. 
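The Removed slice / Created slice entries, together with the cadvisor watch event above, also make the systemd cgroup naming visible: the pod UID has its dashes replaced by underscores and is embedded in kubepods-<qos>-pod<uid>.slice, and each container runs in a cri-containerd-<container-id>.scope beneath that slice. A short sketch of the mapping as it appears in these entries (the function names are mine, not kubelet's):

```go
// slice_names.go — illustrative sketch of the cgroup names visible in the
// journal above; it simply reproduces the observed naming pattern.
package main

import (
	"fmt"
	"strings"
)

// podSlice builds the slice name the journal shows for a QoS class and pod UID.
func podSlice(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

// containerScope builds the containerd scope name seen in the watch-event path above.
func containerScope(containerID string) string {
	return "cri-containerd-" + containerID + ".scope"
}

func main() {
	fmt.Println(podSlice("burstable", "1cd2cac6-540c-40db-abec-393fcae56ea3"))  // the removed cilium pod
	fmt.Println(podSlice("burstable", "34e2ca7d-5877-4e9b-a55a-4663258099e9"))  // cilium-kkgg5
	fmt.Println(podSlice("besteffort", "bce93142-870d-48b4-9252-2a9d7b6c47cc")) // cilium-operator-69b677f97c-27png
	fmt.Println(containerScope("8fe29b5f891e69abce095a4fb540caee553fdcaac35a7a607e82822699901c16"))
}
```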
Oct 2 19:22:40.135124 kubelet[1860]: E1002 19:22:40.135074 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:40.188586 kubelet[1860]: I1002 19:22:40.188524 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-xtables-lock\") pod \"cilium-kkgg5\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " pod="kube-system/cilium-kkgg5" Oct 2 19:22:40.188586 kubelet[1860]: I1002 19:22:40.188596 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34e2ca7d-5877-4e9b-a55a-4663258099e9-clustermesh-secrets\") pod \"cilium-kkgg5\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " pod="kube-system/cilium-kkgg5" Oct 2 19:22:40.188893 kubelet[1860]: I1002 19:22:40.188661 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34e2ca7d-5877-4e9b-a55a-4663258099e9-cilium-config-path\") pod \"cilium-kkgg5\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " pod="kube-system/cilium-kkgg5" Oct 2 19:22:40.188893 kubelet[1860]: I1002 19:22:40.188700 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-host-proc-sys-kernel\") pod \"cilium-kkgg5\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " pod="kube-system/cilium-kkgg5" Oct 2 19:22:40.188893 kubelet[1860]: I1002 19:22:40.188735 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bce93142-870d-48b4-9252-2a9d7b6c47cc-cilium-config-path\") pod \"cilium-operator-69b677f97c-27png\" (UID: \"bce93142-870d-48b4-9252-2a9d7b6c47cc\") " pod="kube-system/cilium-operator-69b677f97c-27png" Oct 2 19:22:40.188893 kubelet[1860]: I1002 19:22:40.188767 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-hostproc\") pod \"cilium-kkgg5\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " pod="kube-system/cilium-kkgg5" Oct 2 19:22:40.188893 kubelet[1860]: I1002 19:22:40.188806 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-lib-modules\") pod \"cilium-kkgg5\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " pod="kube-system/cilium-kkgg5" Oct 2 19:22:40.188893 kubelet[1860]: I1002 19:22:40.188840 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-cni-path\") pod \"cilium-kkgg5\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " pod="kube-system/cilium-kkgg5" Oct 2 19:22:40.189278 kubelet[1860]: I1002 19:22:40.188880 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/34e2ca7d-5877-4e9b-a55a-4663258099e9-cilium-ipsec-secrets\") pod \"cilium-kkgg5\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " pod="kube-system/cilium-kkgg5" Oct 2 19:22:40.189278 
kubelet[1860]: I1002 19:22:40.188914 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-host-proc-sys-net\") pod \"cilium-kkgg5\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " pod="kube-system/cilium-kkgg5" Oct 2 19:22:40.189278 kubelet[1860]: I1002 19:22:40.188955 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d8pk\" (UniqueName: \"kubernetes.io/projected/34e2ca7d-5877-4e9b-a55a-4663258099e9-kube-api-access-5d8pk\") pod \"cilium-kkgg5\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " pod="kube-system/cilium-kkgg5" Oct 2 19:22:40.189278 kubelet[1860]: I1002 19:22:40.189002 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tskzp\" (UniqueName: \"kubernetes.io/projected/bce93142-870d-48b4-9252-2a9d7b6c47cc-kube-api-access-tskzp\") pod \"cilium-operator-69b677f97c-27png\" (UID: \"bce93142-870d-48b4-9252-2a9d7b6c47cc\") " pod="kube-system/cilium-operator-69b677f97c-27png" Oct 2 19:22:40.189278 kubelet[1860]: I1002 19:22:40.189038 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-cilium-cgroup\") pod \"cilium-kkgg5\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " pod="kube-system/cilium-kkgg5" Oct 2 19:22:40.189553 kubelet[1860]: I1002 19:22:40.189075 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-etc-cni-netd\") pod \"cilium-kkgg5\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " pod="kube-system/cilium-kkgg5" Oct 2 19:22:40.189553 kubelet[1860]: I1002 19:22:40.189134 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-bpf-maps\") pod \"cilium-kkgg5\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " pod="kube-system/cilium-kkgg5" Oct 2 19:22:40.189553 kubelet[1860]: I1002 19:22:40.189176 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-cilium-run\") pod \"cilium-kkgg5\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " pod="kube-system/cilium-kkgg5" Oct 2 19:22:40.189553 kubelet[1860]: I1002 19:22:40.189212 1860 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34e2ca7d-5877-4e9b-a55a-4663258099e9-hubble-tls\") pod \"cilium-kkgg5\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " pod="kube-system/cilium-kkgg5" Oct 2 19:22:40.689996 env[1300]: time="2023-10-02T19:22:40.689933407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-27png,Uid:bce93142-870d-48b4-9252-2a9d7b6c47cc,Namespace:kube-system,Attempt:0,}" Oct 2 19:22:40.712845 env[1300]: time="2023-10-02T19:22:40.712799781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kkgg5,Uid:34e2ca7d-5877-4e9b-a55a-4663258099e9,Namespace:kube-system,Attempt:0,}" Oct 2 19:22:40.735864 env[1300]: time="2023-10-02T19:22:40.735374952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:22:40.735864 env[1300]: time="2023-10-02T19:22:40.735418952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:22:40.735864 env[1300]: time="2023-10-02T19:22:40.735429852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:22:40.735864 env[1300]: time="2023-10-02T19:22:40.735621455Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/854fade623f9be2997e68e19950524760f2c81eb32d86fde61c151056112db18 pid=2579 runtime=io.containerd.runc.v2 Oct 2 19:22:40.750795 systemd[1]: Started cri-containerd-854fade623f9be2997e68e19950524760f2c81eb32d86fde61c151056112db18.scope. Oct 2 19:22:40.791073 kernel: kauditd_printk_skb: 51 callbacks suppressed Oct 2 19:22:40.791283 kernel: audit: type=1400 audit(1696274560.770:665): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.770000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.770000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.796279 env[1300]: time="2023-10-02T19:22:40.792593038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:22:40.796279 env[1300]: time="2023-10-02T19:22:40.792687039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:22:40.796279 env[1300]: time="2023-10-02T19:22:40.792710939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:22:40.796279 env[1300]: time="2023-10-02T19:22:40.792855241Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8 pid=2612 runtime=io.containerd.runc.v2 Oct 2 19:22:40.816045 kernel: audit: type=1400 audit(1696274560.770:666): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.816190 kernel: audit: type=1400 audit(1696274560.770:667): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.770000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.770000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.844267 kernel: audit: type=1400 audit(1696274560.770:668): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.837643 systemd[1]: Started cri-containerd-4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8.scope. Oct 2 19:22:40.770000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.861227 kernel: audit: type=1400 audit(1696274560.770:669): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.770000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.876224 kernel: audit: type=1400 audit(1696274560.770:670): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.876319 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:22:40.892008 kernel: audit: type=1400 audit(1696274560.770:671): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.892149 kernel: audit: audit_lost=36 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:22:40.770000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.906728 kernel: audit: type=1400 audit(1696274560.770:672): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.770000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.770000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.790000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.790000 audit: BPF prog-id=83 op=LOAD Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { bpf } for pid=2588 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=2579 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:22:40.802000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835346661646536323366396265323939376536386531393935303532 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { perfmon } for pid=2588 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=2579 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:22:40.802000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835346661646536323366396265323939376536386531393935303532 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { bpf } for pid=2588 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { bpf } for pid=2588 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { bpf } for pid=2588 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { perfmon } for pid=2588 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { perfmon } for pid=2588 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { perfmon } for pid=2588 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: 
denied { perfmon } for pid=2588 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { perfmon } for pid=2588 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { bpf } for pid=2588 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { bpf } for pid=2588 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit: BPF prog-id=84 op=LOAD Oct 2 19:22:40.802000 audit[2588]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c000025120 items=0 ppid=2579 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:22:40.802000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835346661646536323366396265323939376536386531393935303532 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { bpf } for pid=2588 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { bpf } for pid=2588 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { perfmon } for pid=2588 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { perfmon } for pid=2588 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { perfmon } for pid=2588 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { perfmon } for pid=2588 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { perfmon } for pid=2588 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { bpf } for pid=2588 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { bpf } for pid=2588 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit: BPF prog-id=85 op=LOAD Oct 2 19:22:40.802000 audit[2588]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 
a1=c000145770 a2=78 a3=c000025168 items=0 ppid=2579 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:22:40.802000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835346661646536323366396265323939376536386531393935303532 Oct 2 19:22:40.802000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:22:40.802000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { bpf } for pid=2588 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { bpf } for pid=2588 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { bpf } for pid=2588 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { perfmon } for pid=2588 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { perfmon } for pid=2588 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { perfmon } for pid=2588 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { perfmon } for pid=2588 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { perfmon } for pid=2588 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { bpf } for pid=2588 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit[2588]: AVC avc: denied { bpf } for pid=2588 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.802000 audit: BPF prog-id=86 op=LOAD Oct 2 19:22:40.802000 audit[2588]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c000025578 items=0 ppid=2579 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:22:40.802000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835346661646536323366396265323939376536386531393935303532 Oct 2 19:22:40.862000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.862000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.862000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.862000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.862000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.862000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.862000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.862000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.862000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.874000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.896000 audit[2624]: AVC avc: denied { bpf } for pid=2624 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.896000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=2612 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:22:40.896000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466316634343365313832616566633738333339323331346531336661 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { perfmon } for pid=2624 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=2612 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:22:40.911000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466316634343365313832616566633738333339323331346531336661 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { bpf } for pid=2624 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { bpf } for pid=2624 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { bpf } for pid=2624 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { perfmon } for pid=2624 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { perfmon } for pid=2624 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { perfmon } for pid=2624 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { perfmon } for pid=2624 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { perfmon } for pid=2624 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { bpf } for pid=2624 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { bpf } for pid=2624 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit: BPF prog-id=88 op=LOAD Oct 2 19:22:40.911000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c000024510 items=0 ppid=2612 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:22:40.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466316634343365313832616566633738333339323331346531336661 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { bpf } for pid=2624 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { bpf } for pid=2624 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { perfmon } 
for pid=2624 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { perfmon } for pid=2624 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { perfmon } for pid=2624 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { perfmon } for pid=2624 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { perfmon } for pid=2624 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { bpf } for pid=2624 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { bpf } for pid=2624 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit: BPF prog-id=89 op=LOAD Oct 2 19:22:40.911000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c000024558 items=0 ppid=2612 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:22:40.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466316634343365313832616566633738333339323331346531336661 Oct 2 19:22:40.911000 audit: BPF prog-id=89 op=UNLOAD Oct 2 19:22:40.911000 audit: BPF prog-id=88 op=UNLOAD Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { bpf } for pid=2624 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { bpf } for pid=2624 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { bpf } for pid=2624 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { perfmon } for pid=2624 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { perfmon } for pid=2624 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { perfmon } for pid=2624 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { perfmon } 
for pid=2624 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { perfmon } for pid=2624 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { bpf } for pid=2624 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit[2624]: AVC avc: denied { bpf } for pid=2624 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:40.911000 audit: BPF prog-id=90 op=LOAD Oct 2 19:22:40.911000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c000024968 items=0 ppid=2612 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:22:40.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466316634343365313832616566633738333339323331346531336661 Oct 2 19:22:40.931543 env[1300]: time="2023-10-02T19:22:40.931493703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kkgg5,Uid:34e2ca7d-5877-4e9b-a55a-4663258099e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8\"" Oct 2 19:22:40.932462 env[1300]: time="2023-10-02T19:22:40.932436414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-27png,Uid:bce93142-870d-48b4-9252-2a9d7b6c47cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"854fade623f9be2997e68e19950524760f2c81eb32d86fde61c151056112db18\"" Oct 2 19:22:40.934890 env[1300]: time="2023-10-02T19:22:40.934856043Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\"" Oct 2 19:22:40.935056 env[1300]: time="2023-10-02T19:22:40.935029945Z" level=info msg="CreateContainer within sandbox \"4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:22:40.989356 env[1300]: time="2023-10-02T19:22:40.989226895Z" level=info msg="CreateContainer within sandbox \"4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9ad906b4f9ce0a33d203b2ed0841fc9fb77c24344fcb7ffb73b8d9ad6f519a23\"" Oct 2 19:22:40.990645 env[1300]: time="2023-10-02T19:22:40.990611012Z" level=info msg="StartContainer for \"9ad906b4f9ce0a33d203b2ed0841fc9fb77c24344fcb7ffb73b8d9ad6f519a23\"" Oct 2 19:22:41.008168 systemd[1]: Started cri-containerd-9ad906b4f9ce0a33d203b2ed0841fc9fb77c24344fcb7ffb73b8d9ad6f519a23.scope. Oct 2 19:22:41.019692 systemd[1]: cri-containerd-9ad906b4f9ce0a33d203b2ed0841fc9fb77c24344fcb7ffb73b8d9ad6f519a23.scope: Deactivated successfully. 
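The audit records interleaved with the two sandbox starts come from runc: syscall 321 is bpf(2) on x86_64, and the SYSCALL records show those calls succeeding (success=yes) even though AVC capability denials for bpf and perfmon are logged alongside them. Each record group carries a PROCTITLE field with the triggering command line, hex-encoded and NUL-separated. A small decoder for the first PROCTITLE value above — the helper is illustrative, and auditd truncates the field, which is why the sandbox ID comes out cut short:

```go
// proctitle_decode.go — decode the hex PROCTITLE field of an audit record into
// the argv of the process that triggered it (arguments are NUL-separated).
package main

import (
	"encoding/hex"
	"fmt"
	"log"
	"strings"
)

func main() {
	// First PROCTITLE value from the records above (already truncated by auditd).
	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835346661646536323366396265323939376536386531393935303532"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		log.Fatalf("decode: %v", err)
	}
	argv := strings.Split(string(raw), "\x00")
	fmt.Println(strings.Join(argv, " "))
	// Prints: runc --root /run/containerd/runc/k8s.io --log
	//         /run/containerd/io.containerd.runtime.v2.task/k8s.io/854fade623f9be2997e68e1995052
}
```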
Oct 2 19:22:41.072962 env[1300]: time="2023-10-02T19:22:41.072906092Z" level=info msg="shim disconnected" id=9ad906b4f9ce0a33d203b2ed0841fc9fb77c24344fcb7ffb73b8d9ad6f519a23 Oct 2 19:22:41.072962 env[1300]: time="2023-10-02T19:22:41.072964893Z" level=warning msg="cleaning up after shim disconnected" id=9ad906b4f9ce0a33d203b2ed0841fc9fb77c24344fcb7ffb73b8d9ad6f519a23 namespace=k8s.io Oct 2 19:22:41.073303 env[1300]: time="2023-10-02T19:22:41.072976193Z" level=info msg="cleaning up dead shim" Oct 2 19:22:41.081588 env[1300]: time="2023-10-02T19:22:41.081530195Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:22:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2681 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:22:41Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9ad906b4f9ce0a33d203b2ed0841fc9fb77c24344fcb7ffb73b8d9ad6f519a23/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:22:41.081871 env[1300]: time="2023-10-02T19:22:41.081809598Z" level=error msg="copy shim log" error="read /proc/self/fd/38: file already closed" Oct 2 19:22:41.085234 env[1300]: time="2023-10-02T19:22:41.085185638Z" level=error msg="Failed to pipe stdout of container \"9ad906b4f9ce0a33d203b2ed0841fc9fb77c24344fcb7ffb73b8d9ad6f519a23\"" error="reading from a closed fifo" Oct 2 19:22:41.086334 env[1300]: time="2023-10-02T19:22:41.086292951Z" level=error msg="Failed to pipe stderr of container \"9ad906b4f9ce0a33d203b2ed0841fc9fb77c24344fcb7ffb73b8d9ad6f519a23\"" error="reading from a closed fifo" Oct 2 19:22:41.093340 env[1300]: time="2023-10-02T19:22:41.093287835Z" level=error msg="StartContainer for \"9ad906b4f9ce0a33d203b2ed0841fc9fb77c24344fcb7ffb73b8d9ad6f519a23\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:22:41.093556 kubelet[1860]: E1002 19:22:41.093528 1860 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9ad906b4f9ce0a33d203b2ed0841fc9fb77c24344fcb7ffb73b8d9ad6f519a23" Oct 2 19:22:41.093678 kubelet[1860]: E1002 19:22:41.093659 1860 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:22:41.093678 kubelet[1860]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:22:41.093678 kubelet[1860]: rm /hostbin/cilium-mount Oct 2 19:22:41.093678 kubelet[1860]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5d8pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-kkgg5_kube-system(34e2ca7d-5877-4e9b-a55a-4663258099e9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:22:41.093908 kubelet[1860]: E1002 19:22:41.093709 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-kkgg5" podUID=34e2ca7d-5877-4e9b-a55a-4663258099e9 Oct 2 19:22:41.135539 kubelet[1860]: E1002 19:22:41.135490 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:41.634331 env[1300]: time="2023-10-02T19:22:41.634283174Z" level=info msg="CreateContainer within sandbox \"4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:22:41.687364 env[1300]: time="2023-10-02T19:22:41.687308505Z" level=info msg="CreateContainer within sandbox \"4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f\"" Oct 2 19:22:41.687947 env[1300]: time="2023-10-02T19:22:41.687907812Z" level=info msg="StartContainer for \"8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f\"" Oct 2 19:22:41.709615 systemd[1]: Started cri-containerd-8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f.scope. Oct 2 19:22:41.718950 systemd[1]: cri-containerd-8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f.scope: Deactivated successfully. Oct 2 19:22:41.719295 systemd[1]: Stopped cri-containerd-8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f.scope. 
Oct 2 19:22:41.735937 env[1300]: time="2023-10-02T19:22:41.735876683Z" level=info msg="shim disconnected" id=8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f Oct 2 19:22:41.736448 env[1300]: time="2023-10-02T19:22:41.735942084Z" level=warning msg="cleaning up after shim disconnected" id=8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f namespace=k8s.io Oct 2 19:22:41.736448 env[1300]: time="2023-10-02T19:22:41.735954984Z" level=info msg="cleaning up dead shim" Oct 2 19:22:41.744778 env[1300]: time="2023-10-02T19:22:41.744725589Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:22:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2719 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:22:41Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:22:41.745069 env[1300]: time="2023-10-02T19:22:41.745006792Z" level=error msg="copy shim log" error="read /proc/self/fd/40: file already closed" Oct 2 19:22:41.746222 env[1300]: time="2023-10-02T19:22:41.746171506Z" level=error msg="Failed to pipe stdout of container \"8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f\"" error="reading from a closed fifo" Oct 2 19:22:41.746299 env[1300]: time="2023-10-02T19:22:41.746258507Z" level=error msg="Failed to pipe stderr of container \"8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f\"" error="reading from a closed fifo" Oct 2 19:22:41.751011 env[1300]: time="2023-10-02T19:22:41.750964663Z" level=error msg="StartContainer for \"8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:22:41.751269 kubelet[1860]: E1002 19:22:41.751242 1860 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f" Oct 2 19:22:41.751398 kubelet[1860]: E1002 19:22:41.751376 1860 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:22:41.751398 kubelet[1860]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:22:41.751398 kubelet[1860]: rm /hostbin/cilium-mount Oct 2 19:22:41.751398 kubelet[1860]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5d8pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-kkgg5_kube-system(34e2ca7d-5877-4e9b-a55a-4663258099e9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:22:41.751607 kubelet[1860]: E1002 19:22:41.751425 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-kkgg5" podUID=34e2ca7d-5877-4e9b-a55a-4663258099e9 Oct 2 19:22:42.097010 kubelet[1860]: E1002 19:22:42.096967 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:42.135778 kubelet[1860]: E1002 19:22:42.135716 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:42.437812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f-rootfs.mount: Deactivated successfully. 
Oct 2 19:22:42.647951 kubelet[1860]: I1002 19:22:42.647579 1860 scope.go:115] "RemoveContainer" containerID="9ad906b4f9ce0a33d203b2ed0841fc9fb77c24344fcb7ffb73b8d9ad6f519a23" Oct 2 19:22:42.647951 kubelet[1860]: I1002 19:22:42.647917 1860 scope.go:115] "RemoveContainer" containerID="9ad906b4f9ce0a33d203b2ed0841fc9fb77c24344fcb7ffb73b8d9ad6f519a23" Oct 2 19:22:42.649037 env[1300]: time="2023-10-02T19:22:42.649000698Z" level=info msg="RemoveContainer for \"9ad906b4f9ce0a33d203b2ed0841fc9fb77c24344fcb7ffb73b8d9ad6f519a23\"" Oct 2 19:22:42.649680 env[1300]: time="2023-10-02T19:22:42.649650206Z" level=info msg="RemoveContainer for \"9ad906b4f9ce0a33d203b2ed0841fc9fb77c24344fcb7ffb73b8d9ad6f519a23\"" Oct 2 19:22:42.649831 env[1300]: time="2023-10-02T19:22:42.649741407Z" level=error msg="RemoveContainer for \"9ad906b4f9ce0a33d203b2ed0841fc9fb77c24344fcb7ffb73b8d9ad6f519a23\" failed" error="failed to set removing state for container \"9ad906b4f9ce0a33d203b2ed0841fc9fb77c24344fcb7ffb73b8d9ad6f519a23\": container is already in removing state" Oct 2 19:22:42.650771 kubelet[1860]: E1002 19:22:42.650024 1860 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"9ad906b4f9ce0a33d203b2ed0841fc9fb77c24344fcb7ffb73b8d9ad6f519a23\": container is already in removing state" containerID="9ad906b4f9ce0a33d203b2ed0841fc9fb77c24344fcb7ffb73b8d9ad6f519a23" Oct 2 19:22:42.650771 kubelet[1860]: E1002 19:22:42.650059 1860 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "9ad906b4f9ce0a33d203b2ed0841fc9fb77c24344fcb7ffb73b8d9ad6f519a23": container is already in removing state; Skipping pod "cilium-kkgg5_kube-system(34e2ca7d-5877-4e9b-a55a-4663258099e9)" Oct 2 19:22:42.650771 kubelet[1860]: E1002 19:22:42.650528 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-kkgg5_kube-system(34e2ca7d-5877-4e9b-a55a-4663258099e9)\"" pod="kube-system/cilium-kkgg5" podUID=34e2ca7d-5877-4e9b-a55a-4663258099e9 Oct 2 19:22:42.658338 env[1300]: time="2023-10-02T19:22:42.658301308Z" level=info msg="RemoveContainer for \"9ad906b4f9ce0a33d203b2ed0841fc9fb77c24344fcb7ffb73b8d9ad6f519a23\" returns successfully" Oct 2 19:22:42.786018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3441297835.mount: Deactivated successfully. 
Oct 2 19:22:43.136508 kubelet[1860]: E1002 19:22:43.136383 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:43.650424 kubelet[1860]: E1002 19:22:43.650393 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-kkgg5_kube-system(34e2ca7d-5877-4e9b-a55a-4663258099e9)\"" pod="kube-system/cilium-kkgg5" podUID=34e2ca7d-5877-4e9b-a55a-4663258099e9 Oct 2 19:22:44.132867 env[1300]: time="2023-10-02T19:22:44.132812531Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:22:44.137129 kubelet[1860]: E1002 19:22:44.137064 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:44.141694 env[1300]: time="2023-10-02T19:22:44.141653934Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b7eda471b44d1665b27a56412a479c6baff49461eb4cd7e9886be66da63fd36e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:22:44.148591 env[1300]: time="2023-10-02T19:22:44.148551714Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:22:44.149043 env[1300]: time="2023-10-02T19:22:44.149009420Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\" returns image reference \"sha256:b7eda471b44d1665b27a56412a479c6baff49461eb4cd7e9886be66da63fd36e\"" Oct 2 19:22:44.151358 env[1300]: time="2023-10-02T19:22:44.151328447Z" level=info msg="CreateContainer within sandbox \"854fade623f9be2997e68e19950524760f2c81eb32d86fde61c151056112db18\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:22:44.182923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2125986603.mount: Deactivated successfully. Oct 2 19:22:44.185229 kubelet[1860]: W1002 19:22:44.184974 1860 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34e2ca7d_5877_4e9b_a55a_4663258099e9.slice/cri-containerd-9ad906b4f9ce0a33d203b2ed0841fc9fb77c24344fcb7ffb73b8d9ad6f519a23.scope WatchSource:0}: container "9ad906b4f9ce0a33d203b2ed0841fc9fb77c24344fcb7ffb73b8d9ad6f519a23" in namespace "k8s.io": not found Oct 2 19:22:44.200034 env[1300]: time="2023-10-02T19:22:44.199987714Z" level=info msg="CreateContainer within sandbox \"854fade623f9be2997e68e19950524760f2c81eb32d86fde61c151056112db18\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7\"" Oct 2 19:22:44.200567 env[1300]: time="2023-10-02T19:22:44.200468719Z" level=info msg="StartContainer for \"c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7\"" Oct 2 19:22:44.225586 systemd[1]: Started cri-containerd-c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7.scope. 
Oct 2 19:22:44.238000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.238000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.238000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.238000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.238000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.238000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.238000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.238000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.238000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.238000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.238000 audit: BPF prog-id=91 op=LOAD Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { bpf } for pid=2739 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c00011fc48 a2=10 a3=1c items=0 ppid=2579 pid=2739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:22:44.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337303166336461376439346331343562396132326332376464326534 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { perfmon } for pid=2739 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c00011f6b0 a2=3c a3=8 items=0 ppid=2579 pid=2739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:22:44.239000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337303166336461376439346331343562396132326332376464326534 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { bpf } for pid=2739 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { bpf } for pid=2739 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { bpf } for pid=2739 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { perfmon } for pid=2739 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { perfmon } for pid=2739 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { perfmon } for pid=2739 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { perfmon } for pid=2739 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { perfmon } for pid=2739 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { bpf } for pid=2739 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { bpf } for pid=2739 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit: BPF prog-id=92 op=LOAD Oct 2 19:22:44.239000 audit[2739]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00011f9d8 a2=78 a3=c000395000 items=0 ppid=2579 pid=2739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:22:44.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337303166336461376439346331343562396132326332376464326534 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { bpf } for pid=2739 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { bpf } for pid=2739 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: 
denied { perfmon } for pid=2739 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { perfmon } for pid=2739 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { perfmon } for pid=2739 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { perfmon } for pid=2739 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { perfmon } for pid=2739 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { bpf } for pid=2739 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { bpf } for pid=2739 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit: BPF prog-id=93 op=LOAD Oct 2 19:22:44.239000 audit[2739]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00011f770 a2=78 a3=c000395048 items=0 ppid=2579 pid=2739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:22:44.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337303166336461376439346331343562396132326332376464326534 Oct 2 19:22:44.239000 audit: BPF prog-id=93 op=UNLOAD Oct 2 19:22:44.239000 audit: BPF prog-id=92 op=UNLOAD Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { bpf } for pid=2739 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { bpf } for pid=2739 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { bpf } for pid=2739 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { perfmon } for pid=2739 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { perfmon } for pid=2739 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { perfmon } for pid=2739 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: 
denied { perfmon } for pid=2739 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { perfmon } for pid=2739 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { bpf } for pid=2739 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit[2739]: AVC avc: denied { bpf } for pid=2739 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:22:44.239000 audit: BPF prog-id=94 op=LOAD Oct 2 19:22:44.239000 audit[2739]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00011fc30 a2=78 a3=c000395458 items=0 ppid=2579 pid=2739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:22:44.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337303166336461376439346331343562396132326332376464326534 Oct 2 19:22:44.265295 env[1300]: time="2023-10-02T19:22:44.265233874Z" level=info msg="StartContainer for \"c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7\" returns successfully" Oct 2 19:22:44.279000 audit[2751]: AVC avc: denied { map_create } for pid=2751 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c270,c1006 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c270,c1006 tclass=bpf permissive=0 Oct 2 19:22:44.279000 audit[2751]: SYSCALL arch=c000003e syscall=321 success=no exit=-13 a0=0 a1=c0006897d0 a2=48 a3=c0006897c0 items=0 ppid=2579 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c270,c1006 key=(null) Oct 2 19:22:44.279000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:22:45.137584 kubelet[1860]: E1002 19:22:45.137526 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:45.178081 systemd[1]: run-containerd-runc-k8s.io-c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7-runc.zBLF6W.mount: Deactivated successfully. 
Oct 2 19:22:46.137819 kubelet[1860]: E1002 19:22:46.137759 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:47.098568 kubelet[1860]: E1002 19:22:47.098524 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:47.138810 kubelet[1860]: E1002 19:22:47.138751 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:47.292226 kubelet[1860]: W1002 19:22:47.292178 1860 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34e2ca7d_5877_4e9b_a55a_4663258099e9.slice/cri-containerd-8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f.scope WatchSource:0}: task 8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f not found: not found Oct 2 19:22:48.139972 kubelet[1860]: E1002 19:22:48.139912 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:49.140280 kubelet[1860]: E1002 19:22:49.140223 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:50.140466 kubelet[1860]: E1002 19:22:50.140406 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:51.141518 kubelet[1860]: E1002 19:22:51.141460 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:51.974458 kubelet[1860]: E1002 19:22:51.974399 1860 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:52.000674 env[1300]: time="2023-10-02T19:22:52.000623570Z" level=info msg="StopPodSandbox for \"d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399\"" Oct 2 19:22:52.001043 env[1300]: time="2023-10-02T19:22:52.000720071Z" level=info msg="TearDown network for sandbox \"d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399\" successfully" Oct 2 19:22:52.001043 env[1300]: time="2023-10-02T19:22:52.000763472Z" level=info msg="StopPodSandbox for \"d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399\" returns successfully" Oct 2 19:22:52.001523 env[1300]: time="2023-10-02T19:22:52.001487980Z" level=info msg="RemovePodSandbox for \"d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399\"" Oct 2 19:22:52.001654 env[1300]: time="2023-10-02T19:22:52.001522680Z" level=info msg="Forcibly stopping sandbox \"d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399\"" Oct 2 19:22:52.001654 env[1300]: time="2023-10-02T19:22:52.001615881Z" level=info msg="TearDown network for sandbox \"d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399\" successfully" Oct 2 19:22:52.008871 env[1300]: time="2023-10-02T19:22:52.008831261Z" level=info msg="RemovePodSandbox \"d9bd621a22f64ad483ebfd66cf0ad95abfa1a98732f45696e30fb313fa8b9399\" returns successfully" Oct 2 19:22:52.009302 env[1300]: time="2023-10-02T19:22:52.009265666Z" level=info msg="StopPodSandbox for \"7a2155904b1a709b3a1a41fd8ae84724d2376a4b17c34825fe7d06a2136400fb\"" Oct 2 19:22:52.009427 env[1300]: time="2023-10-02T19:22:52.009359167Z" level=info msg="TearDown network for sandbox 
\"7a2155904b1a709b3a1a41fd8ae84724d2376a4b17c34825fe7d06a2136400fb\" successfully" Oct 2 19:22:52.009427 env[1300]: time="2023-10-02T19:22:52.009403667Z" level=info msg="StopPodSandbox for \"7a2155904b1a709b3a1a41fd8ae84724d2376a4b17c34825fe7d06a2136400fb\" returns successfully" Oct 2 19:22:52.009751 env[1300]: time="2023-10-02T19:22:52.009721971Z" level=info msg="RemovePodSandbox for \"7a2155904b1a709b3a1a41fd8ae84724d2376a4b17c34825fe7d06a2136400fb\"" Oct 2 19:22:52.009886 env[1300]: time="2023-10-02T19:22:52.009841472Z" level=info msg="Forcibly stopping sandbox \"7a2155904b1a709b3a1a41fd8ae84724d2376a4b17c34825fe7d06a2136400fb\"" Oct 2 19:22:52.009956 env[1300]: time="2023-10-02T19:22:52.009928773Z" level=info msg="TearDown network for sandbox \"7a2155904b1a709b3a1a41fd8ae84724d2376a4b17c34825fe7d06a2136400fb\" successfully" Oct 2 19:22:52.022453 env[1300]: time="2023-10-02T19:22:52.022415011Z" level=info msg="RemovePodSandbox \"7a2155904b1a709b3a1a41fd8ae84724d2376a4b17c34825fe7d06a2136400fb\" returns successfully" Oct 2 19:22:52.099697 kubelet[1860]: E1002 19:22:52.099664 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:52.142181 kubelet[1860]: E1002 19:22:52.142128 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:53.142951 kubelet[1860]: E1002 19:22:53.142883 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:54.143418 kubelet[1860]: E1002 19:22:54.143356 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:54.213003 env[1300]: time="2023-10-02T19:22:54.212937813Z" level=info msg="CreateContainer within sandbox \"4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:22:54.248235 env[1300]: time="2023-10-02T19:22:54.248182698Z" level=info msg="CreateContainer within sandbox \"4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd\"" Oct 2 19:22:54.248773 env[1300]: time="2023-10-02T19:22:54.248740804Z" level=info msg="StartContainer for \"2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd\"" Oct 2 19:22:54.272597 systemd[1]: Started cri-containerd-2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd.scope. Oct 2 19:22:54.282313 systemd[1]: cri-containerd-2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd.scope: Deactivated successfully. Oct 2 19:22:54.282669 systemd[1]: Stopped cri-containerd-2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd.scope. Oct 2 19:22:54.286705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd-rootfs.mount: Deactivated successfully. 
Oct 2 19:22:54.768586 env[1300]: time="2023-10-02T19:22:54.768525574Z" level=info msg="shim disconnected" id=2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd Oct 2 19:22:54.768586 env[1300]: time="2023-10-02T19:22:54.768582974Z" level=warning msg="cleaning up after shim disconnected" id=2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd namespace=k8s.io Oct 2 19:22:54.768586 env[1300]: time="2023-10-02T19:22:54.768593574Z" level=info msg="cleaning up dead shim" Oct 2 19:22:54.776292 env[1300]: time="2023-10-02T19:22:54.776235958Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:22:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2798 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:22:54Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:22:54.776570 env[1300]: time="2023-10-02T19:22:54.776506361Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:22:54.779243 env[1300]: time="2023-10-02T19:22:54.779184090Z" level=error msg="Failed to pipe stdout of container \"2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd\"" error="reading from a closed fifo" Oct 2 19:22:54.779442 env[1300]: time="2023-10-02T19:22:54.779407392Z" level=error msg="Failed to pipe stderr of container \"2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd\"" error="reading from a closed fifo" Oct 2 19:22:54.783573 env[1300]: time="2023-10-02T19:22:54.783524637Z" level=error msg="StartContainer for \"2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:22:54.783828 kubelet[1860]: E1002 19:22:54.783801 1860 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd" Oct 2 19:22:54.783961 kubelet[1860]: E1002 19:22:54.783932 1860 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:22:54.783961 kubelet[1860]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:22:54.783961 kubelet[1860]: rm /hostbin/cilium-mount Oct 2 19:22:54.783961 kubelet[1860]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5d8pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-kkgg5_kube-system(34e2ca7d-5877-4e9b-a55a-4663258099e9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:22:54.784198 kubelet[1860]: E1002 19:22:54.783981 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-kkgg5" podUID=34e2ca7d-5877-4e9b-a55a-4663258099e9 Oct 2 19:22:55.144293 kubelet[1860]: E1002 19:22:55.144161 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:55.681944 kubelet[1860]: I1002 19:22:55.681910 1860 scope.go:115] "RemoveContainer" containerID="8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f" Oct 2 19:22:55.682394 kubelet[1860]: I1002 19:22:55.682371 1860 scope.go:115] "RemoveContainer" containerID="8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f" Oct 2 19:22:55.684058 env[1300]: time="2023-10-02T19:22:55.683947114Z" level=info msg="RemoveContainer for \"8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f\"" Oct 2 19:22:55.684750 env[1300]: time="2023-10-02T19:22:55.684723822Z" level=info msg="RemoveContainer for \"8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f\"" Oct 2 19:22:55.685217 env[1300]: time="2023-10-02T19:22:55.685168327Z" level=error msg="RemoveContainer for \"8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f\" failed" error="failed to set removing state for container \"8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f\": container is already in removing state" Oct 2 19:22:55.685878 kubelet[1860]: E1002 19:22:55.685856 1860 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container 
\"8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f\": container is already in removing state" containerID="8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f" Oct 2 19:22:55.686034 kubelet[1860]: E1002 19:22:55.685890 1860 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f": container is already in removing state; Skipping pod "cilium-kkgg5_kube-system(34e2ca7d-5877-4e9b-a55a-4663258099e9)" Oct 2 19:22:55.686293 kubelet[1860]: E1002 19:22:55.686274 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-kkgg5_kube-system(34e2ca7d-5877-4e9b-a55a-4663258099e9)\"" pod="kube-system/cilium-kkgg5" podUID=34e2ca7d-5877-4e9b-a55a-4663258099e9 Oct 2 19:22:55.695787 env[1300]: time="2023-10-02T19:22:55.695479239Z" level=info msg="RemoveContainer for \"8746038f962341030ddc812bc699428ba8f3fff041301c829741afc80cd3929f\" returns successfully" Oct 2 19:22:56.145241 kubelet[1860]: E1002 19:22:56.145181 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:57.101220 kubelet[1860]: E1002 19:22:57.101183 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:57.146083 kubelet[1860]: E1002 19:22:57.146017 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:57.874069 kubelet[1860]: W1002 19:22:57.874020 1860 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34e2ca7d_5877_4e9b_a55a_4663258099e9.slice/cri-containerd-2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd.scope WatchSource:0}: task 2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd not found: not found Oct 2 19:22:58.146539 kubelet[1860]: E1002 19:22:58.146395 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:59.146826 kubelet[1860]: E1002 19:22:59.146762 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:00.147059 kubelet[1860]: E1002 19:23:00.146997 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:01.147313 kubelet[1860]: E1002 19:23:01.147256 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:02.101859 kubelet[1860]: E1002 19:23:02.101824 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:23:02.148480 kubelet[1860]: E1002 19:23:02.148421 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:03.148708 kubelet[1860]: E1002 19:23:03.148657 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 
2 19:23:04.149248 kubelet[1860]: E1002 19:23:04.149189 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:05.149863 kubelet[1860]: E1002 19:23:05.149803 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:06.150855 kubelet[1860]: E1002 19:23:06.150804 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:07.102997 kubelet[1860]: E1002 19:23:07.102956 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:23:07.151364 kubelet[1860]: E1002 19:23:07.151301 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:07.209899 kubelet[1860]: E1002 19:23:07.209850 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-kkgg5_kube-system(34e2ca7d-5877-4e9b-a55a-4663258099e9)\"" pod="kube-system/cilium-kkgg5" podUID=34e2ca7d-5877-4e9b-a55a-4663258099e9 Oct 2 19:23:08.152256 kubelet[1860]: E1002 19:23:08.152197 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:09.153103 kubelet[1860]: E1002 19:23:09.153040 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:10.153982 kubelet[1860]: E1002 19:23:10.153946 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:11.154569 kubelet[1860]: E1002 19:23:11.154511 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:11.974079 kubelet[1860]: E1002 19:23:11.974020 1860 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:12.104187 kubelet[1860]: E1002 19:23:12.104150 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:23:12.154919 kubelet[1860]: E1002 19:23:12.154864 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:13.155824 kubelet[1860]: E1002 19:23:13.155767 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:14.156278 kubelet[1860]: E1002 19:23:14.156142 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:15.156636 kubelet[1860]: E1002 19:23:15.156582 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:16.157485 kubelet[1860]: E1002 19:23:16.157424 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:17.105780 kubelet[1860]: E1002 19:23:17.105747 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:23:17.158421 kubelet[1860]: E1002 19:23:17.158365 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:18.158863 kubelet[1860]: E1002 19:23:18.158809 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:19.159856 kubelet[1860]: E1002 19:23:19.159795 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:20.160539 kubelet[1860]: E1002 19:23:20.160481 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:21.161552 kubelet[1860]: E1002 19:23:21.161500 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:22.106550 kubelet[1860]: E1002 19:23:22.106513 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:23:22.161772 kubelet[1860]: E1002 19:23:22.161711 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:22.213074 env[1300]: time="2023-10-02T19:23:22.213021564Z" level=info msg="CreateContainer within sandbox \"4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:23:22.238056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3183703414.mount: Deactivated successfully. Oct 2 19:23:22.244883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount162957361.mount: Deactivated successfully. Oct 2 19:23:22.256784 env[1300]: time="2023-10-02T19:23:22.256730673Z" level=info msg="CreateContainer within sandbox \"4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"819e5813e165b2206cf8e652402991f9161d8b7a1ca79c0ff4c99d87e9456307\"" Oct 2 19:23:22.257427 env[1300]: time="2023-10-02T19:23:22.257396679Z" level=info msg="StartContainer for \"819e5813e165b2206cf8e652402991f9161d8b7a1ca79c0ff4c99d87e9456307\"" Oct 2 19:23:22.276669 systemd[1]: Started cri-containerd-819e5813e165b2206cf8e652402991f9161d8b7a1ca79c0ff4c99d87e9456307.scope. Oct 2 19:23:22.289554 systemd[1]: cri-containerd-819e5813e165b2206cf8e652402991f9161d8b7a1ca79c0ff4c99d87e9456307.scope: Deactivated successfully. 
Oct 2 19:23:22.318912 env[1300]: time="2023-10-02T19:23:22.318848853Z" level=info msg="shim disconnected" id=819e5813e165b2206cf8e652402991f9161d8b7a1ca79c0ff4c99d87e9456307 Oct 2 19:23:22.318912 env[1300]: time="2023-10-02T19:23:22.318911954Z" level=warning msg="cleaning up after shim disconnected" id=819e5813e165b2206cf8e652402991f9161d8b7a1ca79c0ff4c99d87e9456307 namespace=k8s.io Oct 2 19:23:22.318912 env[1300]: time="2023-10-02T19:23:22.318923754Z" level=info msg="cleaning up dead shim" Oct 2 19:23:22.327283 env[1300]: time="2023-10-02T19:23:22.327223632Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:23:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2843 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:23:22Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/819e5813e165b2206cf8e652402991f9161d8b7a1ca79c0ff4c99d87e9456307/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:23:22.327576 env[1300]: time="2023-10-02T19:23:22.327512234Z" level=error msg="copy shim log" error="read /proc/self/fd/49: file already closed" Oct 2 19:23:22.327805 env[1300]: time="2023-10-02T19:23:22.327753837Z" level=error msg="Failed to pipe stdout of container \"819e5813e165b2206cf8e652402991f9161d8b7a1ca79c0ff4c99d87e9456307\"" error="reading from a closed fifo" Oct 2 19:23:22.327954 env[1300]: time="2023-10-02T19:23:22.327914138Z" level=error msg="Failed to pipe stderr of container \"819e5813e165b2206cf8e652402991f9161d8b7a1ca79c0ff4c99d87e9456307\"" error="reading from a closed fifo" Oct 2 19:23:22.332103 env[1300]: time="2023-10-02T19:23:22.332055477Z" level=error msg="StartContainer for \"819e5813e165b2206cf8e652402991f9161d8b7a1ca79c0ff4c99d87e9456307\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:23:22.332383 kubelet[1860]: E1002 19:23:22.332358 1860 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="819e5813e165b2206cf8e652402991f9161d8b7a1ca79c0ff4c99d87e9456307" Oct 2 19:23:22.332525 kubelet[1860]: E1002 19:23:22.332506 1860 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:23:22.332525 kubelet[1860]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:23:22.332525 kubelet[1860]: rm /hostbin/cilium-mount Oct 2 19:23:22.332525 kubelet[1860]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5d8pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-kkgg5_kube-system(34e2ca7d-5877-4e9b-a55a-4663258099e9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:23:22.332755 kubelet[1860]: E1002 19:23:22.332559 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-kkgg5" podUID=34e2ca7d-5877-4e9b-a55a-4663258099e9 Oct 2 19:23:22.729753 kubelet[1860]: I1002 19:23:22.729710 1860 scope.go:115] "RemoveContainer" containerID="2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd" Oct 2 19:23:22.730159 kubelet[1860]: I1002 19:23:22.730127 1860 scope.go:115] "RemoveContainer" containerID="2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd" Oct 2 19:23:22.731536 env[1300]: time="2023-10-02T19:23:22.731487410Z" level=info msg="RemoveContainer for \"2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd\"" Oct 2 19:23:22.732093 env[1300]: time="2023-10-02T19:23:22.732056416Z" level=info msg="RemoveContainer for \"2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd\"" Oct 2 19:23:22.732255 env[1300]: time="2023-10-02T19:23:22.732181717Z" level=error msg="RemoveContainer for \"2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd\" failed" error="failed to set removing state for container \"2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd\": container is already in removing state" Oct 2 19:23:22.732428 kubelet[1860]: E1002 19:23:22.732398 1860 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd\": container is already in removing state" 
containerID="2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd" Oct 2 19:23:22.732524 kubelet[1860]: E1002 19:23:22.732437 1860 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd": container is already in removing state; Skipping pod "cilium-kkgg5_kube-system(34e2ca7d-5877-4e9b-a55a-4663258099e9)" Oct 2 19:23:22.732756 kubelet[1860]: E1002 19:23:22.732726 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-kkgg5_kube-system(34e2ca7d-5877-4e9b-a55a-4663258099e9)\"" pod="kube-system/cilium-kkgg5" podUID=34e2ca7d-5877-4e9b-a55a-4663258099e9 Oct 2 19:23:22.744127 env[1300]: time="2023-10-02T19:23:22.744071928Z" level=info msg="RemoveContainer for \"2725d167ac734928158bb78e2f2294036b83e22ae34177fdd37b82b0cc7cd8fd\" returns successfully" Oct 2 19:23:23.162677 kubelet[1860]: E1002 19:23:23.162539 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:23.235861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-819e5813e165b2206cf8e652402991f9161d8b7a1ca79c0ff4c99d87e9456307-rootfs.mount: Deactivated successfully. Oct 2 19:23:24.163672 kubelet[1860]: E1002 19:23:24.163617 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:25.164213 kubelet[1860]: E1002 19:23:25.164155 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:25.424894 kubelet[1860]: W1002 19:23:25.424768 1860 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34e2ca7d_5877_4e9b_a55a_4663258099e9.slice/cri-containerd-819e5813e165b2206cf8e652402991f9161d8b7a1ca79c0ff4c99d87e9456307.scope WatchSource:0}: task 819e5813e165b2206cf8e652402991f9161d8b7a1ca79c0ff4c99d87e9456307 not found: not found Oct 2 19:23:26.164393 kubelet[1860]: E1002 19:23:26.164337 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:27.108037 kubelet[1860]: E1002 19:23:27.107990 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:23:27.165312 kubelet[1860]: E1002 19:23:27.165261 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:28.165806 kubelet[1860]: E1002 19:23:28.165750 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:29.166483 kubelet[1860]: E1002 19:23:29.166420 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:30.166621 kubelet[1860]: E1002 19:23:30.166556 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:31.167550 kubelet[1860]: E1002 19:23:31.167496 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:23:31.974385 kubelet[1860]: E1002 19:23:31.974321 1860 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:32.108963 kubelet[1860]: E1002 19:23:32.108933 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:23:32.167980 kubelet[1860]: E1002 19:23:32.167927 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:33.168342 kubelet[1860]: E1002 19:23:33.168279 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:34.169224 kubelet[1860]: E1002 19:23:34.169158 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:35.170222 kubelet[1860]: E1002 19:23:35.170164 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:35.210287 kubelet[1860]: E1002 19:23:35.210239 1860 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-kkgg5_kube-system(34e2ca7d-5877-4e9b-a55a-4663258099e9)\"" pod="kube-system/cilium-kkgg5" podUID=34e2ca7d-5877-4e9b-a55a-4663258099e9 Oct 2 19:23:36.170788 kubelet[1860]: E1002 19:23:36.170737 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:37.109834 kubelet[1860]: E1002 19:23:37.109793 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:23:37.171449 kubelet[1860]: E1002 19:23:37.171390 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:38.172515 kubelet[1860]: E1002 19:23:38.172459 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:39.172680 kubelet[1860]: E1002 19:23:39.172621 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:40.173525 kubelet[1860]: E1002 19:23:40.173466 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:41.174057 kubelet[1860]: E1002 19:23:41.173933 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:41.289872 env[1300]: time="2023-10-02T19:23:41.289816172Z" level=info msg="StopPodSandbox for \"4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8\"" Oct 2 19:23:41.293981 env[1300]: time="2023-10-02T19:23:41.289901073Z" level=info msg="Container to stop \"819e5813e165b2206cf8e652402991f9161d8b7a1ca79c0ff4c99d87e9456307\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:23:41.292411 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8-shm.mount: Deactivated successfully. 
Oct 2 19:23:41.300871 systemd[1]: cri-containerd-4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8.scope: Deactivated successfully. Oct 2 19:23:41.307063 kernel: kauditd_printk_skb: 166 callbacks suppressed Oct 2 19:23:41.307187 kernel: audit: type=1334 audit(1696274621.300:719): prog-id=87 op=UNLOAD Oct 2 19:23:41.300000 audit: BPF prog-id=87 op=UNLOAD Oct 2 19:23:41.311000 audit: BPF prog-id=90 op=UNLOAD Oct 2 19:23:41.319193 kernel: audit: type=1334 audit(1696274621.311:720): prog-id=90 op=UNLOAD Oct 2 19:23:41.335532 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8-rootfs.mount: Deactivated successfully. Oct 2 19:23:41.345704 env[1300]: time="2023-10-02T19:23:41.345650553Z" level=info msg="StopContainer for \"c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7\" with timeout 30 (s)" Oct 2 19:23:41.346248 env[1300]: time="2023-10-02T19:23:41.346200458Z" level=info msg="Stop container \"c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7\" with signal terminated" Oct 2 19:23:41.356431 systemd[1]: cri-containerd-c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7.scope: Deactivated successfully. Oct 2 19:23:41.355000 audit: BPF prog-id=91 op=UNLOAD Oct 2 19:23:41.363145 kernel: audit: type=1334 audit(1696274621.355:721): prog-id=91 op=UNLOAD Oct 2 19:23:41.364000 audit: BPF prog-id=94 op=UNLOAD Oct 2 19:23:41.370141 kernel: audit: type=1334 audit(1696274621.364:722): prog-id=94 op=UNLOAD Oct 2 19:23:41.382503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7-rootfs.mount: Deactivated successfully. Oct 2 19:23:41.395370 env[1300]: time="2023-10-02T19:23:41.395312781Z" level=info msg="shim disconnected" id=c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7 Oct 2 19:23:41.395370 env[1300]: time="2023-10-02T19:23:41.395372382Z" level=warning msg="cleaning up after shim disconnected" id=c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7 namespace=k8s.io Oct 2 19:23:41.395657 env[1300]: time="2023-10-02T19:23:41.395384682Z" level=info msg="cleaning up dead shim" Oct 2 19:23:41.396316 env[1300]: time="2023-10-02T19:23:41.396267990Z" level=info msg="shim disconnected" id=4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8 Oct 2 19:23:41.397157 env[1300]: time="2023-10-02T19:23:41.397109897Z" level=warning msg="cleaning up after shim disconnected" id=4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8 namespace=k8s.io Oct 2 19:23:41.397157 env[1300]: time="2023-10-02T19:23:41.397151497Z" level=info msg="cleaning up dead shim" Oct 2 19:23:41.408454 env[1300]: time="2023-10-02T19:23:41.408400894Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:23:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2895 runtime=io.containerd.runc.v2\n" Oct 2 19:23:41.409284 env[1300]: time="2023-10-02T19:23:41.409246001Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:23:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2899 runtime=io.containerd.runc.v2\n" Oct 2 19:23:41.409566 env[1300]: time="2023-10-02T19:23:41.409533604Z" level=info msg="TearDown network for sandbox \"4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8\" successfully" Oct 2 19:23:41.409566 env[1300]: time="2023-10-02T19:23:41.409562204Z" level=info msg="StopPodSandbox for 
\"4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8\" returns successfully" Oct 2 19:23:41.413596 env[1300]: time="2023-10-02T19:23:41.413562239Z" level=info msg="StopContainer for \"c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7\" returns successfully" Oct 2 19:23:41.414236 env[1300]: time="2023-10-02T19:23:41.414204844Z" level=info msg="StopPodSandbox for \"854fade623f9be2997e68e19950524760f2c81eb32d86fde61c151056112db18\"" Oct 2 19:23:41.414346 env[1300]: time="2023-10-02T19:23:41.414258945Z" level=info msg="Container to stop \"c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:23:41.416899 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-854fade623f9be2997e68e19950524760f2c81eb32d86fde61c151056112db18-shm.mount: Deactivated successfully. Oct 2 19:23:41.424000 audit: BPF prog-id=83 op=UNLOAD Oct 2 19:23:41.424865 systemd[1]: cri-containerd-854fade623f9be2997e68e19950524760f2c81eb32d86fde61c151056112db18.scope: Deactivated successfully. Oct 2 19:23:41.437102 kernel: audit: type=1334 audit(1696274621.424:723): prog-id=83 op=UNLOAD Oct 2 19:23:41.437275 kernel: audit: type=1334 audit(1696274621.431:724): prog-id=86 op=UNLOAD Oct 2 19:23:41.431000 audit: BPF prog-id=86 op=UNLOAD Oct 2 19:23:41.457358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-854fade623f9be2997e68e19950524760f2c81eb32d86fde61c151056112db18-rootfs.mount: Deactivated successfully. Oct 2 19:23:41.478783 env[1300]: time="2023-10-02T19:23:41.478728600Z" level=info msg="shim disconnected" id=854fade623f9be2997e68e19950524760f2c81eb32d86fde61c151056112db18 Oct 2 19:23:41.479189 env[1300]: time="2023-10-02T19:23:41.479151004Z" level=warning msg="cleaning up after shim disconnected" id=854fade623f9be2997e68e19950524760f2c81eb32d86fde61c151056112db18 namespace=k8s.io Oct 2 19:23:41.479189 env[1300]: time="2023-10-02T19:23:41.479187404Z" level=info msg="cleaning up dead shim" Oct 2 19:23:41.488216 env[1300]: time="2023-10-02T19:23:41.488162282Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:23:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2940 runtime=io.containerd.runc.v2\n" Oct 2 19:23:41.488544 env[1300]: time="2023-10-02T19:23:41.488508485Z" level=info msg="TearDown network for sandbox \"854fade623f9be2997e68e19950524760f2c81eb32d86fde61c151056112db18\" successfully" Oct 2 19:23:41.488641 env[1300]: time="2023-10-02T19:23:41.488566985Z" level=info msg="StopPodSandbox for \"854fade623f9be2997e68e19950524760f2c81eb32d86fde61c151056112db18\" returns successfully" Oct 2 19:23:41.590457 kubelet[1860]: I1002 19:23:41.589460 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34e2ca7d-5877-4e9b-a55a-4663258099e9-clustermesh-secrets\") pod \"34e2ca7d-5877-4e9b-a55a-4663258099e9\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " Oct 2 19:23:41.590457 kubelet[1860]: I1002 19:23:41.589532 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-lib-modules\") pod \"34e2ca7d-5877-4e9b-a55a-4663258099e9\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " Oct 2 19:23:41.590457 kubelet[1860]: I1002 19:23:41.589575 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/34e2ca7d-5877-4e9b-a55a-4663258099e9-cilium-ipsec-secrets\") pod \"34e2ca7d-5877-4e9b-a55a-4663258099e9\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " Oct 2 19:23:41.590457 kubelet[1860]: I1002 19:23:41.589605 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-etc-cni-netd\") pod \"34e2ca7d-5877-4e9b-a55a-4663258099e9\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " Oct 2 19:23:41.590457 kubelet[1860]: I1002 19:23:41.589637 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-cilium-run\") pod \"34e2ca7d-5877-4e9b-a55a-4663258099e9\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " Oct 2 19:23:41.590457 kubelet[1860]: I1002 19:23:41.589672 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34e2ca7d-5877-4e9b-a55a-4663258099e9-cilium-config-path\") pod \"34e2ca7d-5877-4e9b-a55a-4663258099e9\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " Oct 2 19:23:41.591000 kubelet[1860]: I1002 19:23:41.589703 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-cni-path\") pod \"34e2ca7d-5877-4e9b-a55a-4663258099e9\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " Oct 2 19:23:41.591000 kubelet[1860]: I1002 19:23:41.589736 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d8pk\" (UniqueName: \"kubernetes.io/projected/34e2ca7d-5877-4e9b-a55a-4663258099e9-kube-api-access-5d8pk\") pod \"34e2ca7d-5877-4e9b-a55a-4663258099e9\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " Oct 2 19:23:41.591000 kubelet[1860]: I1002 19:23:41.589772 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tskzp\" (UniqueName: \"kubernetes.io/projected/bce93142-870d-48b4-9252-2a9d7b6c47cc-kube-api-access-tskzp\") pod \"bce93142-870d-48b4-9252-2a9d7b6c47cc\" (UID: \"bce93142-870d-48b4-9252-2a9d7b6c47cc\") " Oct 2 19:23:41.591000 kubelet[1860]: I1002 19:23:41.589806 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34e2ca7d-5877-4e9b-a55a-4663258099e9-hubble-tls\") pod \"34e2ca7d-5877-4e9b-a55a-4663258099e9\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " Oct 2 19:23:41.591000 kubelet[1860]: I1002 19:23:41.589850 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-host-proc-sys-kernel\") pod \"34e2ca7d-5877-4e9b-a55a-4663258099e9\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " Oct 2 19:23:41.591000 kubelet[1860]: I1002 19:23:41.589889 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-host-proc-sys-net\") pod \"34e2ca7d-5877-4e9b-a55a-4663258099e9\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " Oct 2 19:23:41.591368 kubelet[1860]: I1002 19:23:41.589919 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-cilium-cgroup\") pod 
\"34e2ca7d-5877-4e9b-a55a-4663258099e9\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " Oct 2 19:23:41.591368 kubelet[1860]: I1002 19:23:41.589949 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-bpf-maps\") pod \"34e2ca7d-5877-4e9b-a55a-4663258099e9\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " Oct 2 19:23:41.591368 kubelet[1860]: I1002 19:23:41.589992 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-xtables-lock\") pod \"34e2ca7d-5877-4e9b-a55a-4663258099e9\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " Oct 2 19:23:41.591368 kubelet[1860]: I1002 19:23:41.590026 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bce93142-870d-48b4-9252-2a9d7b6c47cc-cilium-config-path\") pod \"bce93142-870d-48b4-9252-2a9d7b6c47cc\" (UID: \"bce93142-870d-48b4-9252-2a9d7b6c47cc\") " Oct 2 19:23:41.591368 kubelet[1860]: I1002 19:23:41.590056 1860 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-hostproc\") pod \"34e2ca7d-5877-4e9b-a55a-4663258099e9\" (UID: \"34e2ca7d-5877-4e9b-a55a-4663258099e9\") " Oct 2 19:23:41.591368 kubelet[1860]: I1002 19:23:41.590107 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-hostproc" (OuterVolumeSpecName: "hostproc") pod "34e2ca7d-5877-4e9b-a55a-4663258099e9" (UID: "34e2ca7d-5877-4e9b-a55a-4663258099e9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:23:41.591714 kubelet[1860]: I1002 19:23:41.590177 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "34e2ca7d-5877-4e9b-a55a-4663258099e9" (UID: "34e2ca7d-5877-4e9b-a55a-4663258099e9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:23:41.594132 kubelet[1860]: I1002 19:23:41.591843 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "34e2ca7d-5877-4e9b-a55a-4663258099e9" (UID: "34e2ca7d-5877-4e9b-a55a-4663258099e9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:23:41.594132 kubelet[1860]: I1002 19:23:41.591897 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "34e2ca7d-5877-4e9b-a55a-4663258099e9" (UID: "34e2ca7d-5877-4e9b-a55a-4663258099e9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:23:41.594132 kubelet[1860]: I1002 19:23:41.591924 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "34e2ca7d-5877-4e9b-a55a-4663258099e9" (UID: "34e2ca7d-5877-4e9b-a55a-4663258099e9"). 
InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:23:41.594132 kubelet[1860]: I1002 19:23:41.591949 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "34e2ca7d-5877-4e9b-a55a-4663258099e9" (UID: "34e2ca7d-5877-4e9b-a55a-4663258099e9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:23:41.594132 kubelet[1860]: I1002 19:23:41.591974 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "34e2ca7d-5877-4e9b-a55a-4663258099e9" (UID: "34e2ca7d-5877-4e9b-a55a-4663258099e9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:23:41.594524 kubelet[1860]: W1002 19:23:41.592195 1860 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/bce93142-870d-48b4-9252-2a9d7b6c47cc/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:23:41.595164 kubelet[1860]: I1002 19:23:41.595103 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "34e2ca7d-5877-4e9b-a55a-4663258099e9" (UID: "34e2ca7d-5877-4e9b-a55a-4663258099e9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:23:41.595270 kubelet[1860]: I1002 19:23:41.595180 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "34e2ca7d-5877-4e9b-a55a-4663258099e9" (UID: "34e2ca7d-5877-4e9b-a55a-4663258099e9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:23:41.595270 kubelet[1860]: W1002 19:23:41.595261 1860 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/34e2ca7d-5877-4e9b-a55a-4663258099e9/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:23:41.597332 kubelet[1860]: I1002 19:23:41.597301 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-cni-path" (OuterVolumeSpecName: "cni-path") pod "34e2ca7d-5877-4e9b-a55a-4663258099e9" (UID: "34e2ca7d-5877-4e9b-a55a-4663258099e9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:23:41.598269 kubelet[1860]: I1002 19:23:41.598103 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bce93142-870d-48b4-9252-2a9d7b6c47cc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bce93142-870d-48b4-9252-2a9d7b6c47cc" (UID: "bce93142-870d-48b4-9252-2a9d7b6c47cc"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:23:41.598643 kubelet[1860]: I1002 19:23:41.598617 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34e2ca7d-5877-4e9b-a55a-4663258099e9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "34e2ca7d-5877-4e9b-a55a-4663258099e9" (UID: "34e2ca7d-5877-4e9b-a55a-4663258099e9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:23:41.599394 kubelet[1860]: I1002 19:23:41.599368 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e2ca7d-5877-4e9b-a55a-4663258099e9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "34e2ca7d-5877-4e9b-a55a-4663258099e9" (UID: "34e2ca7d-5877-4e9b-a55a-4663258099e9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:23:41.600041 kubelet[1860]: I1002 19:23:41.600012 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e2ca7d-5877-4e9b-a55a-4663258099e9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "34e2ca7d-5877-4e9b-a55a-4663258099e9" (UID: "34e2ca7d-5877-4e9b-a55a-4663258099e9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:23:41.602041 kubelet[1860]: I1002 19:23:41.602012 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e2ca7d-5877-4e9b-a55a-4663258099e9-kube-api-access-5d8pk" (OuterVolumeSpecName: "kube-api-access-5d8pk") pod "34e2ca7d-5877-4e9b-a55a-4663258099e9" (UID: "34e2ca7d-5877-4e9b-a55a-4663258099e9"). InnerVolumeSpecName "kube-api-access-5d8pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:23:41.603444 kubelet[1860]: I1002 19:23:41.603415 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bce93142-870d-48b4-9252-2a9d7b6c47cc-kube-api-access-tskzp" (OuterVolumeSpecName: "kube-api-access-tskzp") pod "bce93142-870d-48b4-9252-2a9d7b6c47cc" (UID: "bce93142-870d-48b4-9252-2a9d7b6c47cc"). InnerVolumeSpecName "kube-api-access-tskzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:23:41.604693 kubelet[1860]: I1002 19:23:41.604657 1860 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e2ca7d-5877-4e9b-a55a-4663258099e9-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "34e2ca7d-5877-4e9b-a55a-4663258099e9" (UID: "34e2ca7d-5877-4e9b-a55a-4663258099e9"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:23:41.690468 kubelet[1860]: I1002 19:23:41.690424 1860 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34e2ca7d-5877-4e9b-a55a-4663258099e9-cilium-config-path\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:23:41.690468 kubelet[1860]: I1002 19:23:41.690461 1860 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-cni-path\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:23:41.690468 kubelet[1860]: I1002 19:23:41.690475 1860 reconciler.go:399] "Volume detached for volume \"kube-api-access-5d8pk\" (UniqueName: \"kubernetes.io/projected/34e2ca7d-5877-4e9b-a55a-4663258099e9-kube-api-access-5d8pk\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:23:41.690735 kubelet[1860]: I1002 19:23:41.690488 1860 reconciler.go:399] "Volume detached for volume \"kube-api-access-tskzp\" (UniqueName: \"kubernetes.io/projected/bce93142-870d-48b4-9252-2a9d7b6c47cc-kube-api-access-tskzp\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:23:41.690735 kubelet[1860]: I1002 19:23:41.690500 1860 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-cilium-run\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:23:41.690735 kubelet[1860]: I1002 19:23:41.690512 1860 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34e2ca7d-5877-4e9b-a55a-4663258099e9-hubble-tls\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:23:41.690735 kubelet[1860]: I1002 19:23:41.690523 1860 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-host-proc-sys-kernel\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:23:41.690735 kubelet[1860]: I1002 19:23:41.690534 1860 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-host-proc-sys-net\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:23:41.690735 kubelet[1860]: I1002 19:23:41.690545 1860 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-xtables-lock\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:23:41.690735 kubelet[1860]: I1002 19:23:41.690560 1860 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bce93142-870d-48b4-9252-2a9d7b6c47cc-cilium-config-path\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:23:41.690735 kubelet[1860]: I1002 19:23:41.690573 1860 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-hostproc\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:23:41.690943 kubelet[1860]: I1002 19:23:41.690584 1860 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-cilium-cgroup\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:23:41.690943 kubelet[1860]: I1002 19:23:41.690595 1860 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-bpf-maps\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:23:41.690943 kubelet[1860]: I1002 
19:23:41.690607 1860 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34e2ca7d-5877-4e9b-a55a-4663258099e9-clustermesh-secrets\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:23:41.690943 kubelet[1860]: I1002 19:23:41.690619 1860 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-lib-modules\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:23:41.690943 kubelet[1860]: I1002 19:23:41.690631 1860 reconciler.go:399] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/34e2ca7d-5877-4e9b-a55a-4663258099e9-cilium-ipsec-secrets\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:23:41.690943 kubelet[1860]: I1002 19:23:41.690642 1860 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34e2ca7d-5877-4e9b-a55a-4663258099e9-etc-cni-netd\") on node \"10.200.8.48\" DevicePath \"\"" Oct 2 19:23:41.766253 kubelet[1860]: I1002 19:23:41.766226 1860 scope.go:115] "RemoveContainer" containerID="c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7" Oct 2 19:23:41.777146 env[1300]: time="2023-10-02T19:23:41.776047363Z" level=info msg="RemoveContainer for \"c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7\"" Oct 2 19:23:41.776561 systemd[1]: Removed slice kubepods-burstable-pod34e2ca7d_5877_4e9b_a55a_4663258099e9.slice. Oct 2 19:23:41.779889 systemd[1]: Removed slice kubepods-besteffort-podbce93142_870d_48b4_9252_2a9d7b6c47cc.slice. Oct 2 19:23:41.790072 env[1300]: time="2023-10-02T19:23:41.790025483Z" level=info msg="RemoveContainer for \"c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7\" returns successfully" Oct 2 19:23:41.790293 kubelet[1860]: I1002 19:23:41.790272 1860 scope.go:115] "RemoveContainer" containerID="c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7" Oct 2 19:23:41.790627 env[1300]: time="2023-10-02T19:23:41.790551188Z" level=error msg="ContainerStatus for \"c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7\": not found" Oct 2 19:23:41.790806 kubelet[1860]: E1002 19:23:41.790782 1860 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7\": not found" containerID="c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7" Oct 2 19:23:41.790886 kubelet[1860]: I1002 19:23:41.790826 1860 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7} err="failed to get container status \"c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"c701f3da7d94c145b9a22c27dd2e4999c934b0ea14f90cfa5212895aff3ef2d7\": not found" Oct 2 19:23:41.790886 kubelet[1860]: I1002 19:23:41.790841 1860 scope.go:115] "RemoveContainer" containerID="819e5813e165b2206cf8e652402991f9161d8b7a1ca79c0ff4c99d87e9456307" Oct 2 19:23:41.791933 env[1300]: time="2023-10-02T19:23:41.791904799Z" level=info msg="RemoveContainer for \"819e5813e165b2206cf8e652402991f9161d8b7a1ca79c0ff4c99d87e9456307\"" Oct 2 
19:23:41.800169 env[1300]: time="2023-10-02T19:23:41.800108470Z" level=info msg="RemoveContainer for \"819e5813e165b2206cf8e652402991f9161d8b7a1ca79c0ff4c99d87e9456307\" returns successfully" Oct 2 19:23:42.111607 kubelet[1860]: E1002 19:23:42.111474 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:23:42.174142 kubelet[1860]: E1002 19:23:42.174077 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:42.213273 kubelet[1860]: I1002 19:23:42.213235 1860 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=34e2ca7d-5877-4e9b-a55a-4663258099e9 path="/var/lib/kubelet/pods/34e2ca7d-5877-4e9b-a55a-4663258099e9/volumes" Oct 2 19:23:42.213819 kubelet[1860]: I1002 19:23:42.213795 1860 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=bce93142-870d-48b4-9252-2a9d7b6c47cc path="/var/lib/kubelet/pods/bce93142-870d-48b4-9252-2a9d7b6c47cc/volumes" Oct 2 19:23:42.292395 systemd[1]: var-lib-kubelet-pods-34e2ca7d\x2d5877\x2d4e9b\x2da55a\x2d4663258099e9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:23:42.292539 systemd[1]: var-lib-kubelet-pods-34e2ca7d\x2d5877\x2d4e9b\x2da55a\x2d4663258099e9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5d8pk.mount: Deactivated successfully. Oct 2 19:23:42.292643 systemd[1]: var-lib-kubelet-pods-34e2ca7d\x2d5877\x2d4e9b\x2da55a\x2d4663258099e9-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 2 19:23:42.292749 systemd[1]: var-lib-kubelet-pods-bce93142\x2d870d\x2d48b4\x2d9252\x2d2a9d7b6c47cc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtskzp.mount: Deactivated successfully. Oct 2 19:23:42.292843 systemd[1]: var-lib-kubelet-pods-34e2ca7d\x2d5877\x2d4e9b\x2da55a\x2d4663258099e9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Oct 2 19:23:43.174354 kubelet[1860]: E1002 19:23:43.174289 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:44.174495 kubelet[1860]: E1002 19:23:44.174426 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:45.174904 kubelet[1860]: E1002 19:23:45.174842 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:46.175407 kubelet[1860]: E1002 19:23:46.175342 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:47.112499 kubelet[1860]: E1002 19:23:47.112460 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:23:47.176374 kubelet[1860]: E1002 19:23:47.176312 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:48.177500 kubelet[1860]: E1002 19:23:48.177437 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:49.178097 kubelet[1860]: E1002 19:23:49.178037 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:50.179207 kubelet[1860]: E1002 19:23:50.179145 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:51.179531 kubelet[1860]: E1002 19:23:51.179474 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:51.973885 kubelet[1860]: E1002 19:23:51.973826 1860 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:52.025110 env[1300]: time="2023-10-02T19:23:52.025055676Z" level=info msg="StopPodSandbox for \"4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8\"" Oct 2 19:23:52.025577 env[1300]: time="2023-10-02T19:23:52.025183677Z" level=info msg="TearDown network for sandbox \"4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8\" successfully" Oct 2 19:23:52.025577 env[1300]: time="2023-10-02T19:23:52.025235878Z" level=info msg="StopPodSandbox for \"4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8\" returns successfully" Oct 2 19:23:52.025938 env[1300]: time="2023-10-02T19:23:52.025904483Z" level=info msg="RemovePodSandbox for \"4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8\"" Oct 2 19:23:52.026071 env[1300]: time="2023-10-02T19:23:52.025944884Z" level=info msg="Forcibly stopping sandbox \"4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8\"" Oct 2 19:23:52.026071 env[1300]: time="2023-10-02T19:23:52.026027784Z" level=info msg="TearDown network for sandbox \"4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8\" successfully" Oct 2 19:23:52.035329 env[1300]: time="2023-10-02T19:23:52.035284261Z" level=info msg="RemovePodSandbox \"4f1f443e182aefc783392314e13fa535c97e738b903afab32469c3447e2ec4d8\" returns successfully" Oct 2 19:23:52.035743 env[1300]: time="2023-10-02T19:23:52.035715365Z" level=info msg="StopPodSandbox for \"854fade623f9be2997e68e19950524760f2c81eb32d86fde61c151056112db18\"" Oct 2 
19:23:52.035855 env[1300]: time="2023-10-02T19:23:52.035796765Z" level=info msg="TearDown network for sandbox \"854fade623f9be2997e68e19950524760f2c81eb32d86fde61c151056112db18\" successfully" Oct 2 19:23:52.035855 env[1300]: time="2023-10-02T19:23:52.035837766Z" level=info msg="StopPodSandbox for \"854fade623f9be2997e68e19950524760f2c81eb32d86fde61c151056112db18\" returns successfully" Oct 2 19:23:52.036190 env[1300]: time="2023-10-02T19:23:52.036155768Z" level=info msg="RemovePodSandbox for \"854fade623f9be2997e68e19950524760f2c81eb32d86fde61c151056112db18\"" Oct 2 19:23:52.036296 env[1300]: time="2023-10-02T19:23:52.036203269Z" level=info msg="Forcibly stopping sandbox \"854fade623f9be2997e68e19950524760f2c81eb32d86fde61c151056112db18\"" Oct 2 19:23:52.036349 env[1300]: time="2023-10-02T19:23:52.036297669Z" level=info msg="TearDown network for sandbox \"854fade623f9be2997e68e19950524760f2c81eb32d86fde61c151056112db18\" successfully" Oct 2 19:23:52.045201 env[1300]: time="2023-10-02T19:23:52.045155143Z" level=info msg="RemovePodSandbox \"854fade623f9be2997e68e19950524760f2c81eb32d86fde61c151056112db18\" returns successfully" Oct 2 19:23:52.113215 kubelet[1860]: E1002 19:23:52.113171 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:23:52.180680 kubelet[1860]: E1002 19:23:52.180629 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:53.181208 kubelet[1860]: E1002 19:23:53.181149 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:54.181743 kubelet[1860]: E1002 19:23:54.181685 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:55.182311 kubelet[1860]: E1002 19:23:55.182249 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:56.182832 kubelet[1860]: E1002 19:23:56.182777 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:57.114959 kubelet[1860]: E1002 19:23:57.114917 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:23:57.183342 kubelet[1860]: E1002 19:23:57.183286 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:58.184172 kubelet[1860]: E1002 19:23:58.184095 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:23:59.184445 kubelet[1860]: E1002 19:23:59.184386 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:24:00.102004 kubelet[1860]: E1002 19:24:00.101925 1860 controller.go:187] failed to update lease, error: Put "https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.48?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Oct 2 19:24:00.184676 kubelet[1860]: E1002 19:24:00.184610 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:24:01.066440 kubelet[1860]: E1002 19:24:01.066385 1860 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"10.200.8.48\": Get \"https://10.200.8.4:6443/api/v1/nodes/10.200.8.48?resourceVersion=0&timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 2 19:24:01.185278 kubelet[1860]: E1002 19:24:01.185211 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:24:02.116032 kubelet[1860]: E1002 19:24:02.115998 1860 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:24:02.185668 kubelet[1860]: E1002 19:24:02.185612 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:24:03.186565 kubelet[1860]: E1002 19:24:03.186501 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:24:04.187374 kubelet[1860]: E1002 19:24:04.187314 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:24:05.188381 kubelet[1860]: E1002 19:24:05.188322 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:24:06.189402 kubelet[1860]: E1002 19:24:06.189340 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"