Oct 2 19:54:12.017686 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023 Oct 2 19:54:12.017719 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:54:12.017733 kernel: BIOS-provided physical RAM map: Oct 2 19:54:12.017743 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 2 19:54:12.017753 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Oct 2 19:54:12.017763 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Oct 2 19:54:12.017778 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Oct 2 19:54:12.017789 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Oct 2 19:54:12.017799 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Oct 2 19:54:12.017809 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Oct 2 19:54:12.017820 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Oct 2 19:54:12.017830 kernel: printk: bootconsole [earlyser0] enabled Oct 2 19:54:12.017840 kernel: NX (Execute Disable) protection: active Oct 2 19:54:12.017851 kernel: efi: EFI v2.70 by Microsoft Oct 2 19:54:12.017866 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5caa98 RNG=0x3ffd1018 Oct 2 19:54:12.017878 kernel: random: crng init done Oct 2 19:54:12.017889 kernel: SMBIOS 3.1.0 present. 
Oct 2 19:54:12.017901 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 05/09/2022 Oct 2 19:54:12.017912 kernel: Hypervisor detected: Microsoft Hyper-V Oct 2 19:54:12.017939 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Oct 2 19:54:12.017951 kernel: Hyper-V Host Build:20348-10.0-1-0.1462 Oct 2 19:54:12.017962 kernel: Hyper-V: Nested features: 0x1e0101 Oct 2 19:54:12.017976 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Oct 2 19:54:12.017986 kernel: Hyper-V: Using hypercall for remote TLB flush Oct 2 19:54:12.017998 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Oct 2 19:54:12.018009 kernel: tsc: Marking TSC unstable due to running on Hyper-V Oct 2 19:54:12.018021 kernel: tsc: Detected 2593.907 MHz processor Oct 2 19:54:12.018033 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 2 19:54:12.018045 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 2 19:54:12.018057 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Oct 2 19:54:12.018069 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 2 19:54:12.018080 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Oct 2 19:54:12.018094 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Oct 2 19:54:12.018106 kernel: Using GB pages for direct mapping Oct 2 19:54:12.018117 kernel: Secure boot disabled Oct 2 19:54:12.018129 kernel: ACPI: Early table checksum verification disabled Oct 2 19:54:12.018140 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Oct 2 19:54:12.018152 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 19:54:12.018164 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 19:54:12.018176 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Oct 2 19:54:12.018194 kernel: ACPI: FACS 0x000000003FFFE000 000040 Oct 2 19:54:12.018207 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 19:54:12.018219 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 19:54:12.018231 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 19:54:12.018243 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 19:54:12.018254 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 19:54:12.018269 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 19:54:12.018281 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 19:54:12.018293 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Oct 2 19:54:12.018306 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Oct 2 19:54:12.018318 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Oct 2 19:54:12.018330 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Oct 2 19:54:12.018342 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Oct 2 19:54:12.018355 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Oct 2 19:54:12.018369 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Oct 2 19:54:12.018381 kernel: ACPI: 
Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Oct 2 19:54:12.018393 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Oct 2 19:54:12.018404 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Oct 2 19:54:12.018416 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Oct 2 19:54:12.018427 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Oct 2 19:54:12.018438 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Oct 2 19:54:12.018450 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Oct 2 19:54:12.018462 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Oct 2 19:54:12.018475 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Oct 2 19:54:12.018487 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Oct 2 19:54:12.018499 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Oct 2 19:54:12.018510 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Oct 2 19:54:12.018521 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Oct 2 19:54:12.018533 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Oct 2 19:54:12.018545 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Oct 2 19:54:12.018556 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Oct 2 19:54:12.018568 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Oct 2 19:54:12.018582 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Oct 2 19:54:12.018593 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Oct 2 19:54:12.018605 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Oct 2 19:54:12.018616 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Oct 2 19:54:12.018628 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Oct 2 19:54:12.018640 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Oct 2 19:54:12.018651 kernel: Zone ranges: Oct 2 19:54:12.018663 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 2 19:54:12.018674 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Oct 2 19:54:12.018687 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Oct 2 19:54:12.018699 kernel: Movable zone start for each node Oct 2 19:54:12.018710 kernel: Early memory node ranges Oct 2 19:54:12.018721 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Oct 2 19:54:12.018733 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Oct 2 19:54:12.018744 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Oct 2 19:54:12.018756 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Oct 2 19:54:12.018767 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Oct 2 19:54:12.018779 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 19:54:12.018792 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Oct 2 19:54:12.018804 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Oct 2 19:54:12.018815 kernel: ACPI: PM-Timer IO Port: 0x408 Oct 2 19:54:12.018827 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Oct 2 19:54:12.018838 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Oct 2 19:54:12.018850 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 
2 19:54:12.018861 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 2 19:54:12.018873 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Oct 2 19:54:12.018884 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Oct 2 19:54:12.018898 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Oct 2 19:54:12.018909 kernel: Booting paravirtualized kernel on Hyper-V Oct 2 19:54:12.018921 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 2 19:54:12.018959 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Oct 2 19:54:12.018970 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Oct 2 19:54:12.018981 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Oct 2 19:54:12.018991 kernel: pcpu-alloc: [0] 0 1 Oct 2 19:54:12.019002 kernel: Hyper-V: PV spinlocks enabled Oct 2 19:54:12.019012 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 2 19:54:12.019028 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Oct 2 19:54:12.019040 kernel: Policy zone: Normal Oct 2 19:54:12.019054 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:54:12.019066 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:54:12.019077 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Oct 2 19:54:12.019090 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 19:54:12.019101 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:54:12.019112 kernel: Memory: 8081204K/8387460K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 305996K reserved, 0K cma-reserved) Oct 2 19:54:12.019127 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 2 19:54:12.019140 kernel: ftrace: allocating 34453 entries in 135 pages Oct 2 19:54:12.019161 kernel: ftrace: allocated 135 pages with 4 groups Oct 2 19:54:12.019177 kernel: rcu: Hierarchical RCU implementation. Oct 2 19:54:12.019191 kernel: rcu: RCU event tracing is enabled. Oct 2 19:54:12.019203 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 2 19:54:12.019217 kernel: Rude variant of Tasks RCU enabled. Oct 2 19:54:12.019228 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:54:12.019240 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 2 19:54:12.019251 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 2 19:54:12.019263 kernel: Using NULL legacy PIC Oct 2 19:54:12.019280 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Oct 2 19:54:12.019294 kernel: Console: colour dummy device 80x25 Oct 2 19:54:12.019305 kernel: printk: console [tty1] enabled Oct 2 19:54:12.019326 kernel: printk: console [ttyS0] enabled Oct 2 19:54:12.019337 kernel: printk: bootconsole [earlyser0] disabled Oct 2 19:54:12.019352 kernel: ACPI: Core revision 20210730 Oct 2 19:54:12.019364 kernel: Failed to register legacy timer interrupt Oct 2 19:54:12.019376 kernel: APIC: Switch to symmetric I/O mode setup Oct 2 19:54:12.019391 kernel: Hyper-V: Using IPI hypercalls Oct 2 19:54:12.019409 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907) Oct 2 19:54:12.019420 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Oct 2 19:54:12.019432 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Oct 2 19:54:12.019443 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 2 19:54:12.019455 kernel: Spectre V2 : Mitigation: Retpolines Oct 2 19:54:12.019467 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 2 19:54:12.019482 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 2 19:54:12.019494 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Oct 2 19:54:12.019508 kernel: RETBleed: Vulnerable Oct 2 19:54:12.019521 kernel: Speculative Store Bypass: Vulnerable Oct 2 19:54:12.019534 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Oct 2 19:54:12.019546 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 2 19:54:12.019558 kernel: GDS: Unknown: Dependent on hypervisor status Oct 2 19:54:12.019569 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 2 19:54:12.019580 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 2 19:54:12.019593 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 2 19:54:12.019608 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Oct 2 19:54:12.019621 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Oct 2 19:54:12.019632 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Oct 2 19:54:12.019644 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 2 19:54:12.019656 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Oct 2 19:54:12.019668 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Oct 2 19:54:12.019681 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Oct 2 19:54:12.019693 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Oct 2 19:54:12.019705 kernel: Freeing SMP alternatives memory: 32K Oct 2 19:54:12.019717 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:54:12.019730 kernel: LSM: Security Framework initializing Oct 2 19:54:12.019742 kernel: SELinux: Initializing. 
Oct 2 19:54:12.019757 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 2 19:54:12.019769 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 2 19:54:12.019781 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Oct 2 19:54:12.019795 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Oct 2 19:54:12.019809 kernel: signal: max sigframe size: 3632 Oct 2 19:54:12.019823 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:54:12.019837 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 2 19:54:12.019851 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:54:12.019866 kernel: x86: Booting SMP configuration: Oct 2 19:54:12.019879 kernel: .... node #0, CPUs: #1 Oct 2 19:54:12.019896 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Oct 2 19:54:12.019910 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Oct 2 19:54:12.019942 kernel: smp: Brought up 1 node, 2 CPUs Oct 2 19:54:12.019957 kernel: smpboot: Max logical packages: 1 Oct 2 19:54:12.019971 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Oct 2 19:54:12.019984 kernel: devtmpfs: initialized Oct 2 19:54:12.019997 kernel: x86/mm: Memory block size: 128MB Oct 2 19:54:12.020010 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Oct 2 19:54:12.020028 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:54:12.020041 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 2 19:54:12.020055 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:54:12.020068 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:54:12.020081 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:54:12.020094 kernel: audit: type=2000 audit(1696276451.024:1): state=initialized audit_enabled=0 res=1 Oct 2 19:54:12.020108 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:54:12.020122 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 2 19:54:12.020134 kernel: cpuidle: using governor menu Oct 2 19:54:12.020151 kernel: ACPI: bus type PCI registered Oct 2 19:54:12.020164 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:54:12.020178 kernel: dca service started, version 1.12.1 Oct 2 19:54:12.020191 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 2 19:54:12.020204 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Oct 2 19:54:12.020218 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Oct 2 19:54:12.020231 kernel: ACPI: Added _OSI(Module Device)
Oct 2 19:54:12.020244 kernel: ACPI: Added _OSI(Processor Device)
Oct 2 19:54:12.020258 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 2 19:54:12.020274 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 2 19:54:12.020287 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Oct 2 19:54:12.020300 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Oct 2 19:54:12.020313 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Oct 2 19:54:12.020327 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 2 19:54:12.020340 kernel: ACPI: Interpreter enabled
Oct 2 19:54:12.020354 kernel: ACPI: PM: (supports S0 S5)
Oct 2 19:54:12.020367 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 2 19:54:12.020380 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 2 19:54:12.020397 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Oct 2 19:54:12.020410 kernel: iommu: Default domain type: Translated
Oct 2 19:54:12.020423 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 2 19:54:12.020436 kernel: vgaarb: loaded
Oct 2 19:54:12.020449 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 2 19:54:12.020463 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Oct 2 19:54:12.020476 kernel: PTP clock support registered
Oct 2 19:54:12.020489 kernel: Registered efivars operations
Oct 2 19:54:12.020502 kernel: PCI: Using ACPI for IRQ routing
Oct 2 19:54:12.020515 kernel: PCI: System does not support PCI
Oct 2 19:54:12.020532 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Oct 2 19:54:12.020545 kernel: VFS: Disk quotas dquot_6.6.0
Oct 2 19:54:12.020558 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 2 19:54:12.020572 kernel: pnp: PnP ACPI init
Oct 2 19:54:12.020584 kernel: pnp: PnP ACPI: found 3 devices
Oct 2 19:54:12.020598 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 2 19:54:12.020611 kernel: NET: Registered PF_INET protocol family
Oct 2 19:54:12.020624 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 2 19:54:12.020640 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 2 19:54:12.020653 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 2 19:54:12.020667 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 2 19:54:12.020680 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 2 19:54:12.020693 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 2 19:54:12.020706 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 2 19:54:12.020720 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 2 19:54:12.020733 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 2 19:54:12.020746 kernel: NET: Registered PF_XDP protocol family
Oct 2 19:54:12.020762 kernel: PCI: CLS 0 bytes, default 64
Oct 2 19:54:12.020775 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 2 19:54:12.020789 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Oct 2 19:54:12.020802 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Oct 2 19:54:12.020816 kernel: Initialise system trusted keyrings
Oct 2 19:54:12.020828 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Oct 2 19:54:12.020842 kernel: Key type asymmetric registered
Oct 2 19:54:12.020855 kernel: Asymmetric key parser 'x509' registered
Oct 2 19:54:12.020867 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 2 19:54:12.020883 kernel: io scheduler mq-deadline registered
Oct 2 19:54:12.020896 kernel: io scheduler kyber registered
Oct 2 19:54:12.020910 kernel: io scheduler bfq registered
Oct 2 19:54:12.020948 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 2 19:54:12.020962 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 2 19:54:12.020975 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 2 19:54:12.020988 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Oct 2 19:54:12.021001 kernel: i8042: PNP: No PS/2 controller found.
Oct 2 19:54:12.021162 kernel: rtc_cmos 00:02: registered as rtc0
Oct 2 19:54:12.021286 kernel: rtc_cmos 00:02: setting system clock to 2023-10-02T19:54:11 UTC (1696276451)
Oct 2 19:54:12.021408 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Oct 2 19:54:12.021426 kernel: fail to initialize ptp_kvm
Oct 2 19:54:12.021441 kernel: intel_pstate: CPU model not supported
Oct 2 19:54:12.021455 kernel: efifb: probing for efifb
Oct 2 19:54:12.021467 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Oct 2 19:54:12.021480 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Oct 2 19:54:12.021493 kernel: efifb: scrolling: redraw
Oct 2 19:54:12.021509 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Oct 2 19:54:12.021521 kernel: Console: switching to colour frame buffer device 128x48
Oct 2 19:54:12.021533 kernel: fb0: EFI VGA frame buffer device
Oct 2 19:54:12.021545 kernel: pstore: Registered efi as persistent store backend
Oct 2 19:54:12.021557 kernel: NET: Registered PF_INET6 protocol family
Oct 2 19:54:12.021569 kernel: Segment Routing with IPv6
Oct 2 19:54:12.021580 kernel: In-situ OAM (IOAM) with IPv6
Oct 2 19:54:12.021593 kernel: NET: Registered PF_PACKET protocol family
Oct 2 19:54:12.021606 kernel: Key type dns_resolver registered
Oct 2 19:54:12.021621 kernel: IPI shorthand broadcast: enabled
Oct 2 19:54:12.021634 kernel: sched_clock: Marking stable (748797000, 21774100)->(957286300, -186715200)
Oct 2 19:54:12.021646 kernel: registered taskstats version 1
Oct 2 19:54:12.021658 kernel: Loading compiled-in X.509 certificates
Oct 2 19:54:12.021670 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861'
Oct 2 19:54:12.021681 kernel: Key type .fscrypt registered
Oct 2 19:54:12.021693 kernel: Key type fscrypt-provisioning registered
Oct 2 19:54:12.021705 kernel: pstore: Using crash dump compression: deflate
Oct 2 19:54:12.021720 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 2 19:54:12.021732 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:54:12.021744 kernel: ima: No architecture policies found Oct 2 19:54:12.021757 kernel: Freeing unused kernel image (initmem) memory: 45372K Oct 2 19:54:12.021769 kernel: Write protecting the kernel read-only data: 28672k Oct 2 19:54:12.021781 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 2 19:54:12.021793 kernel: Freeing unused kernel image (rodata/data gap) memory: 644K Oct 2 19:54:12.021805 kernel: Run /init as init process Oct 2 19:54:12.021818 kernel: with arguments: Oct 2 19:54:12.021831 kernel: /init Oct 2 19:54:12.021845 kernel: with environment: Oct 2 19:54:12.021857 kernel: HOME=/ Oct 2 19:54:12.021869 kernel: TERM=linux Oct 2 19:54:12.021882 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:54:12.021898 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:54:12.021913 systemd[1]: Detected virtualization microsoft. Oct 2 19:54:12.021938 systemd[1]: Detected architecture x86-64. Oct 2 19:54:12.021959 systemd[1]: Running in initrd. Oct 2 19:54:12.021972 systemd[1]: No hostname configured, using default hostname. Oct 2 19:54:12.021984 systemd[1]: Hostname set to . Oct 2 19:54:12.021998 systemd[1]: Initializing machine ID from random generator. Oct 2 19:54:12.022011 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:54:12.022024 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:54:12.022037 systemd[1]: Reached target cryptsetup.target. Oct 2 19:54:12.022050 systemd[1]: Reached target paths.target. Oct 2 19:54:12.022064 systemd[1]: Reached target slices.target. Oct 2 19:54:12.022080 systemd[1]: Reached target swap.target. Oct 2 19:54:12.022094 systemd[1]: Reached target timers.target. Oct 2 19:54:12.022108 systemd[1]: Listening on iscsid.socket. Oct 2 19:54:12.022121 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:54:12.022134 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:54:12.022148 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:54:12.022162 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:54:12.022178 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:54:12.022191 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:54:12.022204 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:54:12.022217 systemd[1]: Reached target sockets.target. Oct 2 19:54:12.022231 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:54:12.022245 systemd[1]: Finished network-cleanup.service. Oct 2 19:54:12.022258 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:54:12.022271 systemd[1]: Starting systemd-journald.service... Oct 2 19:54:12.022285 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:54:12.022302 systemd[1]: Starting systemd-resolved.service... Oct 2 19:54:12.022316 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:54:12.022334 systemd-journald[183]: Journal started Oct 2 19:54:12.022401 systemd-journald[183]: Runtime Journal (/run/log/journal/2d54a30d06c94285bed0aae93267b6b2) is 8.0M, max 159.0M, 151.0M free. Oct 2 19:54:12.017599 systemd-modules-load[184]: Inserted module 'overlay' Oct 2 19:54:12.039340 systemd[1]: Started systemd-journald.service. 
Oct 2 19:54:12.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:12.039758 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:54:12.063855 kernel: audit: type=1130 audit(1696276452.038:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:12.053353 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:54:12.053659 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:54:12.054718 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:54:12.055430 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:54:12.108675 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:54:12.108703 kernel: audit: type=1130 audit(1696276452.053:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:12.108722 kernel: Bridge firewalling registered Oct 2 19:54:12.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:12.071139 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:54:12.108777 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:54:12.111782 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:54:12.121327 systemd-resolved[185]: Positive Trust Anchors: Oct 2 19:54:12.123220 systemd-modules-load[184]: Inserted module 'br_netfilter' Oct 2 19:54:12.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:12.137332 dracut-cmdline[201]: dracut-dracut-053 Oct 2 19:54:12.137332 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:54:12.162057 kernel: audit: type=1130 audit(1696276452.053:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:12.139022 systemd-resolved[185]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:54:12.167442 kernel: SCSI subsystem initialized Oct 2 19:54:12.139078 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:54:12.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:12.142741 systemd-resolved[185]: Defaulting to hostname 'linux'. Oct 2 19:54:12.200377 kernel: audit: type=1130 audit(1696276452.053:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:12.143817 systemd[1]: Started systemd-resolved.service. Oct 2 19:54:12.146096 systemd[1]: Reached target nss-lookup.target. Oct 2 19:54:12.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:12.226218 kernel: audit: type=1130 audit(1696276452.086:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:12.226262 kernel: audit: type=1130 audit(1696276452.110:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:12.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:12.237085 kernel: audit: type=1130 audit(1696276452.145:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:12.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:12.247020 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:54:12.250457 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:54:12.256013 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:54:12.260022 systemd-modules-load[184]: Inserted module 'dm_multipath' Oct 2 19:54:12.260755 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:54:12.284169 kernel: audit: type=1130 audit(1696276452.263:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:12.284201 kernel: Loading iSCSI transport class v2.0-870. 
Oct 2 19:54:12.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:12.265747 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:54:12.286678 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:54:12.303006 kernel: audit: type=1130 audit(1696276452.287:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:12.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:12.313946 kernel: iscsi: registered transport (tcp) Oct 2 19:54:12.338020 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:54:12.338059 kernel: QLogic iSCSI HBA Driver Oct 2 19:54:12.367056 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:54:12.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:12.370208 systemd[1]: Starting dracut-pre-udev.service... Oct 2 19:54:12.421947 kernel: raid6: avx512x4 gen() 18479 MB/s Oct 2 19:54:12.441937 kernel: raid6: avx512x4 xor() 8774 MB/s Oct 2 19:54:12.461932 kernel: raid6: avx512x2 gen() 18537 MB/s Oct 2 19:54:12.481936 kernel: raid6: avx512x2 xor() 28858 MB/s Oct 2 19:54:12.500935 kernel: raid6: avx512x1 gen() 18647 MB/s Oct 2 19:54:12.520932 kernel: raid6: avx512x1 xor() 26187 MB/s Oct 2 19:54:12.540935 kernel: raid6: avx2x4 gen() 18687 MB/s Oct 2 19:54:12.560932 kernel: raid6: avx2x4 xor() 8066 MB/s Oct 2 19:54:12.580934 kernel: raid6: avx2x2 gen() 18544 MB/s Oct 2 19:54:12.600952 kernel: raid6: avx2x2 xor() 21433 MB/s Oct 2 19:54:12.620936 kernel: raid6: avx2x1 gen() 14043 MB/s Oct 2 19:54:12.639935 kernel: raid6: avx2x1 xor() 18823 MB/s Oct 2 19:54:12.659935 kernel: raid6: sse2x4 gen() 11752 MB/s Oct 2 19:54:12.679932 kernel: raid6: sse2x4 xor() 7265 MB/s Oct 2 19:54:12.698936 kernel: raid6: sse2x2 gen() 12871 MB/s Oct 2 19:54:12.718939 kernel: raid6: sse2x2 xor() 7721 MB/s Oct 2 19:54:12.738936 kernel: raid6: sse2x1 gen() 11613 MB/s Oct 2 19:54:12.761692 kernel: raid6: sse2x1 xor() 5945 MB/s Oct 2 19:54:12.761723 kernel: raid6: using algorithm avx2x4 gen() 18687 MB/s Oct 2 19:54:12.761736 kernel: raid6: .... xor() 8066 MB/s, rmw enabled Oct 2 19:54:12.764699 kernel: raid6: using avx512x2 recovery algorithm Oct 2 19:54:12.782951 kernel: xor: automatically using best checksumming function avx Oct 2 19:54:12.877949 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 2 19:54:12.886505 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:54:12.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:12.890000 audit: BPF prog-id=7 op=LOAD Oct 2 19:54:12.890000 audit: BPF prog-id=8 op=LOAD Oct 2 19:54:12.891841 systemd[1]: Starting systemd-udevd.service... Oct 2 19:54:12.905432 systemd-udevd[384]: Using default interface naming scheme 'v252'. Oct 2 19:54:12.909995 systemd[1]: Started systemd-udevd.service. 
Oct 2 19:54:12.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:54:12.917774 systemd[1]: Starting dracut-pre-trigger.service...
Oct 2 19:54:12.933592 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Oct 2 19:54:12.962990 systemd[1]: Finished dracut-pre-trigger.service.
Oct 2 19:54:12.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:54:12.968050 systemd[1]: Starting systemd-udev-trigger.service...
Oct 2 19:54:13.002890 systemd[1]: Finished systemd-udev-trigger.service.
Oct 2 19:54:13.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:54:13.049124 kernel: cryptd: max_cpu_qlen set to 1000
Oct 2 19:54:13.068877 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 2 19:54:13.068946 kernel: AES CTR mode by8 optimization enabled
Oct 2 19:54:13.086330 kernel: hv_vmbus: Vmbus version:5.2
Oct 2 19:54:13.094023 kernel: hv_vmbus: registering driver hyperv_keyboard
Oct 2 19:54:13.108949 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Oct 2 19:54:13.117948 kernel: hv_vmbus: registering driver hv_netvsc
Oct 2 19:54:13.126105 kernel: hv_vmbus: registering driver hv_storvsc
Oct 2 19:54:13.133494 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 2 19:54:13.133523 kernel: scsi host0: storvsc_host_t
Oct 2 19:54:13.133557 kernel: scsi host1: storvsc_host_t
Oct 2 19:54:13.138636 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Oct 2 19:54:13.142945 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Oct 2 19:54:13.164028 kernel: hv_vmbus: registering driver hid_hyperv
Oct 2 19:54:13.177543 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Oct 2 19:54:13.177736 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Oct 2 19:54:13.177749 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Oct 2 19:54:13.177878 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Oct 2 19:54:13.186508 kernel: sd 0:0:0:0: [sda] Write Protect is off
Oct 2 19:54:13.186816 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Oct 2 19:54:13.186954 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Oct 2 19:54:13.197953 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 2 19:54:13.202941 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Oct 2 19:54:13.210336 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Oct 2 19:54:13.210479 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 2 19:54:13.211945 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Oct 2 19:54:13.264820 kernel: hv_netvsc 000d3ab9-1fe7-000d-3ab9-1fe7000d3ab9 eth0: VF slot 1 added
Oct 2 19:54:13.274941 kernel: hv_vmbus: registering driver hv_pci
Oct 2 19:54:13.283712 kernel: hv_pci cd392741-d2bc-4da3-9c4b-c0c753c10acc: PCI VMBus probing: Using version 0x10004
Oct 2 19:54:13.283933 kernel: hv_pci cd392741-d2bc-4da3-9c4b-c0c753c10acc: PCI host bridge to bus d2bc:00
Oct 2 19:54:13.292621 kernel: pci_bus d2bc:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Oct 2 19:54:13.292783 kernel: pci_bus d2bc:00: No busn resource found for root bus, will use [bus 00-ff]
Oct 2 19:54:13.303307 kernel: pci d2bc:00:02.0: [15b3:1016] type 00 class 0x020000
Oct 2 19:54:13.311915 kernel: pci d2bc:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Oct 2 19:54:13.327972 kernel: pci d2bc:00:02.0: enabling Extended Tags
Oct 2 19:54:13.341940 kernel: pci d2bc:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at d2bc:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Oct 2 19:54:13.350803 kernel: pci_bus d2bc:00: busn_res: [bus 00-ff] end is updated to 00
Oct 2 19:54:13.350973 kernel: pci d2bc:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Oct 2 19:54:13.446949 kernel: mlx5_core d2bc:00:02.0: firmware version: 14.30.1224
Oct 2 19:54:13.515539 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Oct 2 19:54:13.539944 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (446)
Oct 2 19:54:13.560443 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Oct 2 19:54:13.603940 kernel: mlx5_core d2bc:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Oct 2 19:54:13.658140 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Oct 2 19:54:13.709003 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Oct 2 19:54:13.711839 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Oct 2 19:54:13.725013 systemd[1]: Starting disk-uuid.service...
Oct 2 19:54:13.738951 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 2 19:54:13.760093 kernel: mlx5_core d2bc:00:02.0: Supported tc offload range - chains: 1, prios: 1
Oct 2 19:54:13.765941 kernel: mlx5_core d2bc:00:02.0: mlx5e_tc_post_act_init:40:(pid 7): firmware level support is missing
Oct 2 19:54:13.777719 kernel: hv_netvsc 000d3ab9-1fe7-000d-3ab9-1fe7000d3ab9 eth0: VF registering: eth1
Oct 2 19:54:13.777922 kernel: mlx5_core d2bc:00:02.0 eth1: joined to eth0
Oct 2 19:54:13.793941 kernel: mlx5_core d2bc:00:02.0 enP53948s1: renamed from eth1
Oct 2 19:54:14.755579 disk-uuid[556]: The operation has completed successfully.
Oct 2 19:54:14.757791 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 2 19:54:14.825449 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 2 19:54:14.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:54:14.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:54:14.825558 systemd[1]: Finished disk-uuid.service.
Oct 2 19:54:14.833493 systemd[1]: Starting verity-setup.service...
Oct 2 19:54:14.866942 kernel: device-mapper: verity: sha256 using implementation "sha256-generic"
Oct 2 19:54:15.024272 systemd[1]: Found device dev-mapper-usr.device.
Oct 2 19:54:15.029299 systemd[1]: Finished verity-setup.service.
Oct 2 19:54:15.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:54:15.034083 systemd[1]: Mounting sysusr-usr.mount...
Oct 2 19:54:15.114955 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:54:15.115100 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:54:15.117111 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 19:54:15.117870 systemd[1]: Starting ignition-setup.service... Oct 2 19:54:15.125298 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 19:54:15.148557 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:54:15.148600 kernel: BTRFS info (device sda6): using free space tree Oct 2 19:54:15.148626 kernel: BTRFS info (device sda6): has skinny extents Oct 2 19:54:15.196169 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:54:15.204012 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:54:15.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:15.208000 audit: BPF prog-id=9 op=LOAD Oct 2 19:54:15.209143 systemd[1]: Starting systemd-networkd.service... Oct 2 19:54:15.232937 systemd-networkd[829]: lo: Link UP Oct 2 19:54:15.234958 systemd-networkd[829]: lo: Gained carrier Oct 2 19:54:15.237269 systemd-networkd[829]: Enumeration completed Oct 2 19:54:15.238416 systemd[1]: Started systemd-networkd.service. Oct 2 19:54:15.242024 systemd-networkd[829]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:54:15.247268 systemd[1]: Reached target network.target. Oct 2 19:54:15.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:15.387108 systemd[1]: Starting iscsiuio.service... Oct 2 19:54:15.393432 systemd[1]: Finished ignition-setup.service. Oct 2 19:54:15.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:15.399113 systemd[1]: Started iscsiuio.service. Oct 2 19:54:15.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:15.403533 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:54:15.409703 systemd[1]: Starting iscsid.service... Oct 2 19:54:15.414018 iscsid[836]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:54:15.414018 iscsid[836]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 2 19:54:15.414018 iscsid[836]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:54:15.414018 iscsid[836]: If using hardware iscsi like qla4xxx this message can be ignored. 
Oct 2 19:54:15.414018 iscsid[836]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:54:15.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:15.443413 iscsid[836]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:54:15.433109 systemd[1]: Started iscsid.service. Oct 2 19:54:15.438247 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:54:15.453676 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:54:15.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:15.455721 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:54:15.459260 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:54:15.471095 kernel: mlx5_core d2bc:00:02.0 enP53948s1: Link up Oct 2 19:54:15.461274 systemd[1]: Reached target remote-fs.target. Oct 2 19:54:15.470049 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:54:15.480142 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:54:15.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:15.836821 kernel: hv_netvsc 000d3ab9-1fe7-000d-3ab9-1fe7000d3ab9 eth0: Data path switched to VF: enP53948s1 Oct 2 19:54:15.837090 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 2 19:54:15.836116 systemd-networkd[829]: enP53948s1: Link UP Oct 2 19:54:15.836264 systemd-networkd[829]: eth0: Link UP Oct 2 19:54:15.842683 systemd-networkd[829]: eth0: Gained carrier Oct 2 19:54:15.849387 systemd-networkd[829]: enP53948s1: Gained carrier Oct 2 19:54:15.875021 systemd-networkd[829]: eth0: DHCPv4 address 10.200.8.20/24, gateway 10.200.8.1 acquired from 168.63.129.16 Oct 2 19:54:17.229096 systemd-networkd[829]: eth0: Gained IPv6LL Oct 2 19:54:17.687087 ignition[835]: Ignition 2.14.0 Oct 2 19:54:17.687101 ignition[835]: Stage: fetch-offline Oct 2 19:54:17.687177 ignition[835]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:54:17.687219 ignition[835]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Oct 2 19:54:17.721303 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 2 19:54:17.762910 ignition[835]: parsed url from cmdline: "" Oct 2 19:54:17.762942 ignition[835]: no config URL provided Oct 2 19:54:17.762958 ignition[835]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:54:17.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:17.765253 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:54:17.790238 kernel: kauditd_printk_skb: 18 callbacks suppressed Oct 2 19:54:17.790265 kernel: audit: type=1130 audit(1696276457.770:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:54:17.762975 ignition[835]: no config at "/usr/lib/ignition/user.ign" Oct 2 19:54:17.771468 systemd[1]: Starting ignition-fetch.service... Oct 2 19:54:17.762989 ignition[835]: failed to fetch config: resource requires networking Oct 2 19:54:17.764366 ignition[835]: Ignition finished successfully Oct 2 19:54:17.779815 ignition[855]: Ignition 2.14.0 Oct 2 19:54:17.779821 ignition[855]: Stage: fetch Oct 2 19:54:17.779913 ignition[855]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:54:17.779944 ignition[855]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Oct 2 19:54:17.786254 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 2 19:54:17.786538 ignition[855]: parsed url from cmdline: "" Oct 2 19:54:17.786544 ignition[855]: no config URL provided Oct 2 19:54:17.786551 ignition[855]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:54:17.786560 ignition[855]: no config at "/usr/lib/ignition/user.ign" Oct 2 19:54:17.786590 ignition[855]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Oct 2 19:54:17.807284 ignition[855]: GET result: OK Oct 2 19:54:17.808804 ignition[855]: failed to retrieve userdata from IMDS, falling back to custom data: not a config (empty) Oct 2 19:54:17.896957 ignition[855]: opening config device: "/dev/sr0" Oct 2 19:54:17.897381 ignition[855]: getting drive status for "/dev/sr0" Oct 2 19:54:17.897454 ignition[855]: drive status: OK Oct 2 19:54:17.897483 ignition[855]: mounting config device Oct 2 19:54:17.897494 ignition[855]: op(1): [started] mounting "/dev/sr0" at "/tmp/ignition-azure3370130374" Oct 2 19:54:17.919943 kernel: UDF-fs: INFO Mounting volume 'UDF Volume', timestamp 2023/10/03 00:00 (1000) Oct 2 19:54:17.919753 ignition[855]: op(1): [finished] mounting "/dev/sr0" at "/tmp/ignition-azure3370130374" Oct 2 19:54:17.919762 ignition[855]: checking for config drive Oct 2 19:54:17.921010 ignition[855]: reading config Oct 2 19:54:17.921374 ignition[855]: op(2): [started] unmounting "/dev/sr0" at "/tmp/ignition-azure3370130374" Oct 2 19:54:17.924110 ignition[855]: op(2): [finished] unmounting "/dev/sr0" at "/tmp/ignition-azure3370130374" Oct 2 19:54:17.924144 ignition[855]: config has been read from custom data Oct 2 19:54:17.924219 ignition[855]: parsing config with SHA512: 5c86a0440c9c92d5d093815a2d55f34dc40e7990701903acc59139bf1c39e57ef3c958066bf8ed6b073f2143b8362561c11760a2b25521bc7bbcea705a7ee061 Oct 2 19:54:17.932348 systemd[1]: tmp-ignition\x2dazure3370130374.mount: Deactivated successfully. Oct 2 19:54:17.955189 unknown[855]: fetched base config from "system" Oct 2 19:54:17.955961 unknown[855]: fetched base config from "system" Oct 2 19:54:17.956395 ignition[855]: fetch: fetch complete Oct 2 19:54:17.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:17.955967 unknown[855]: fetched user config from "azure" Oct 2 19:54:17.978002 kernel: audit: type=1130 audit(1696276457.961:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:54:17.956401 ignition[855]: fetch: fetch passed Oct 2 19:54:17.959152 systemd[1]: Finished ignition-fetch.service. Oct 2 19:54:17.956438 ignition[855]: Ignition finished successfully Oct 2 19:54:17.962791 systemd[1]: Starting ignition-kargs.service... Oct 2 19:54:17.985876 ignition[862]: Ignition 2.14.0 Oct 2 19:54:17.985887 ignition[862]: Stage: kargs Oct 2 19:54:17.986129 ignition[862]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:54:17.986153 ignition[862]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Oct 2 19:54:17.992457 systemd[1]: Finished ignition-kargs.service. Oct 2 19:54:17.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:17.988566 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 2 19:54:18.012510 kernel: audit: type=1130 audit(1696276457.994:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:18.006275 systemd[1]: Starting ignition-disks.service... Oct 2 19:54:17.991129 ignition[862]: kargs: kargs passed Oct 2 19:54:17.991174 ignition[862]: Ignition finished successfully Oct 2 19:54:18.013413 ignition[869]: Ignition 2.14.0 Oct 2 19:54:18.013419 ignition[869]: Stage: disks Oct 2 19:54:18.013518 ignition[869]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:54:18.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:18.021130 systemd[1]: Finished ignition-disks.service. Oct 2 19:54:18.042136 kernel: audit: type=1130 audit(1696276458.023:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:18.013536 ignition[869]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Oct 2 19:54:18.023554 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:54:18.016998 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 2 19:54:18.038292 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:54:18.019416 ignition[869]: disks: disks passed Oct 2 19:54:18.042119 systemd[1]: Reached target local-fs.target. Oct 2 19:54:18.019454 ignition[869]: Ignition finished successfully Oct 2 19:54:18.043994 systemd[1]: Reached target sysinit.target. Oct 2 19:54:18.045910 systemd[1]: Reached target basic.target. Oct 2 19:54:18.050630 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:54:18.100157 systemd-fsck[877]: ROOT: clean, 603/7326000 files, 481068/7359488 blocks Oct 2 19:54:18.105363 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:54:18.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:18.110373 systemd[1]: Mounting sysroot.mount... 
Oct 2 19:54:18.123750 kernel: audit: type=1130 audit(1696276458.109:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:18.135943 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:54:18.136181 systemd[1]: Mounted sysroot.mount. Oct 2 19:54:18.139651 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:54:18.170592 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:54:18.174362 systemd[1]: Starting flatcar-metadata-hostname.service... Oct 2 19:54:18.178328 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:54:18.178373 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:54:18.184231 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:54:18.211785 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:54:18.217051 systemd[1]: Starting initrd-setup-root.service... Oct 2 19:54:18.233950 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (887) Oct 2 19:54:18.239066 initrd-setup-root[892]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:54:18.249009 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:54:18.249035 kernel: BTRFS info (device sda6): using free space tree Oct 2 19:54:18.249049 kernel: BTRFS info (device sda6): has skinny extents Oct 2 19:54:18.252178 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:54:18.257788 initrd-setup-root[918]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:54:18.273007 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:54:18.277728 initrd-setup-root[934]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:54:18.611120 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:54:18.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:18.616674 systemd[1]: Starting ignition-mount.service... Oct 2 19:54:18.628838 kernel: audit: type=1130 audit(1696276458.614:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:18.632134 systemd[1]: Starting sysroot-boot.service... Oct 2 19:54:18.653395 ignition[953]: INFO : Ignition 2.14.0 Oct 2 19:54:18.653395 ignition[953]: INFO : Stage: mount Oct 2 19:54:18.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:18.654731 systemd[1]: Finished sysroot-boot.service. Oct 2 19:54:18.672641 kernel: audit: type=1130 audit(1696276458.657:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:54:18.672672 ignition[953]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:54:18.672672 ignition[953]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Oct 2 19:54:18.683134 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 2 19:54:18.683134 ignition[953]: INFO : mount: mount passed Oct 2 19:54:18.683134 ignition[953]: INFO : Ignition finished successfully Oct 2 19:54:18.700956 kernel: audit: type=1130 audit(1696276458.683:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:18.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:18.679537 systemd[1]: Finished ignition-mount.service. Oct 2 19:54:18.920911 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Oct 2 19:54:18.921025 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Oct 2 19:54:19.307348 coreos-metadata[886]: Oct 02 19:54:19.307 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Oct 2 19:54:19.321500 coreos-metadata[886]: Oct 02 19:54:19.321 INFO Fetch successful Oct 2 19:54:19.354586 coreos-metadata[886]: Oct 02 19:54:19.354 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Oct 2 19:54:19.369745 coreos-metadata[886]: Oct 02 19:54:19.369 INFO Fetch successful Oct 2 19:54:19.382980 coreos-metadata[886]: Oct 02 19:54:19.382 INFO wrote hostname ci-3510.3.0-a-d5a4e3b63c to /sysroot/etc/hostname Oct 2 19:54:19.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:19.384601 systemd[1]: Finished flatcar-metadata-hostname.service. Oct 2 19:54:19.404307 kernel: audit: type=1130 audit(1696276459.387:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:19.389956 systemd[1]: Starting ignition-files.service... Oct 2 19:54:19.407480 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:54:19.425280 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (965) Oct 2 19:54:19.425316 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:54:19.425332 kernel: BTRFS info (device sda6): using free space tree Oct 2 19:54:19.428761 kernel: BTRFS info (device sda6): has skinny extents Oct 2 19:54:19.436684 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
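flatcar-metadata-hostname above resolves the hostname by fetching the instance name from IMDS and writing it to /sysroot/etc/hostname. A rough equivalent, reusing the endpoint and target path from the log, with the assumed "Metadata: true" header:

    # Sketch of what flatcar-metadata-hostname does above: fetch the instance
    # name from IMDS and write it out as the hostname. Endpoint and target path
    # are taken from the log; the "Metadata: true" header is an IMDS requirement.
    import urllib.request

    IMDS_NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
                     "?api-version=2017-08-01&format=text")

    def write_hostname(target="/sysroot/etc/hostname", timeout=5):
        req = urllib.request.Request(IMDS_NAME_URL, headers={"Metadata": "true"})
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            name = resp.read().decode().strip()
        with open(target, "w") as f:
            f.write(name + "\n")
        return name

    if __name__ == "__main__":
        print("wrote hostname", write_hostname())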
Oct 2 19:54:19.449621 ignition[984]: INFO : Ignition 2.14.0 Oct 2 19:54:19.449621 ignition[984]: INFO : Stage: files Oct 2 19:54:19.453825 ignition[984]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:54:19.453825 ignition[984]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Oct 2 19:54:19.462120 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 2 19:54:19.475663 ignition[984]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:54:19.478892 ignition[984]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:54:19.478892 ignition[984]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:54:19.509280 ignition[984]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:54:19.512911 ignition[984]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:54:19.520116 unknown[984]: wrote ssh authorized keys file for user: core Oct 2 19:54:19.522716 ignition[984]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:54:19.522716 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Oct 2 19:54:19.522716 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Oct 2 19:54:20.013732 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 2 19:54:20.890195 ignition[984]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Oct 2 19:54:20.898685 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Oct 2 19:54:20.898685 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.24.2-linux-amd64.tar.gz" Oct 2 19:54:20.898685 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz: attempt #1 Oct 2 19:54:21.187346 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 2 19:54:21.245697 ignition[984]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 961188117863ca9af5b084e84691e372efee93ad09daf6a0422e8d75a5803f394d8968064f7ca89f14e8973766201e731241f32538cf2c8d91f0233e786302df Oct 2 19:54:21.252698 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.24.2-linux-amd64.tar.gz" Oct 2 19:54:21.252698 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:54:21.261114 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/amd64/kubeadm: attempt #1 Oct 2 19:54:21.412979 ignition[984]: INFO : files: createFilesystemsFiles: 
createFiles: op(5): GET result: OK Oct 2 19:54:23.333785 ignition[984]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 43b8f213f1732c092e34008d5334e6622a6603f7ec5890c395ac911d50069d0dc11a81fa38436df40fc875a10fee6ee13aa285c017f1de210171065e847c99c5 Oct 2 19:54:23.341544 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:54:23.341544 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:54:23.341544 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/amd64/kubelet: attempt #1 Oct 2 19:54:23.576578 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Oct 2 19:54:27.560060 ignition[984]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 82b36a0b83a1d48ef1f70e3ed2a263b3ce935304cdc0606d194b290217fb04f98628b0d82e200b51ccf5c05c718b2476274ae710bb143fffe28dc6bbf8407d54 Oct 2 19:54:27.572828 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:54:27.572828 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:54:27.572828 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:54:27.572828 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:54:27.572828 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:54:27.572828 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Oct 2 19:54:27.572828 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:54:27.572828 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem900456012" Oct 2 19:54:27.572828 ignition[984]: CRITICAL : files: createFilesystemsFiles: createFiles: op(9): op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem900456012": device or resource busy Oct 2 19:54:27.572828 ignition[984]: ERROR : files: createFilesystemsFiles: createFiles: op(9): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem900456012", trying btrfs: device or resource busy Oct 2 19:54:27.572828 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem900456012" Oct 2 19:54:27.639844 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (984) Oct 2 19:54:27.639870 kernel: audit: type=1130 audit(1696276467.613:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:27.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:54:27.639932 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem900456012" Oct 2 19:54:27.639932 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [started] unmounting "/mnt/oem900456012" Oct 2 19:54:27.639932 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [finished] unmounting "/mnt/oem900456012" Oct 2 19:54:27.639932 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Oct 2 19:54:27.639932 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 19:54:27.639932 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(d): oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:54:27.639932 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(d): op(e): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem689291395" Oct 2 19:54:27.639932 ignition[984]: CRITICAL : files: createFilesystemsFiles: createFiles: op(d): op(e): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem689291395": device or resource busy Oct 2 19:54:27.639932 ignition[984]: ERROR : files: createFilesystemsFiles: createFiles: op(d): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem689291395", trying btrfs: device or resource busy Oct 2 19:54:27.639932 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(d): op(f): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem689291395" Oct 2 19:54:27.639932 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(d): op(f): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem689291395" Oct 2 19:54:27.639932 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(d): op(10): [started] unmounting "/mnt/oem689291395" Oct 2 19:54:27.639932 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(d): op(10): [finished] unmounting "/mnt/oem689291395" Oct 2 19:54:27.639932 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 19:54:27.639932 ignition[984]: INFO : files: op(11): [started] processing unit "waagent.service" Oct 2 19:54:27.639932 ignition[984]: INFO : files: op(11): [finished] processing unit "waagent.service" Oct 2 19:54:27.586289 systemd[1]: mnt-oem900456012.mount: Deactivated successfully. 
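Each download in the files stage above is checked against an expected SHA512 digest ("file matches expected sum of: ..."). A small sketch of that verification, reusing the kubeadm URL and digest from the log:

    # Sketch of the digest check Ignition logs for each artifact above
    # ("file matches expected sum of: ..."): hash the download with SHA512 and
    # compare against the expected hex digest. URL and digest are the kubeadm
    # pair from the log.
    import hashlib
    import urllib.request

    KUBEADM_URL = ("https://storage.googleapis.com/kubernetes-release/release/"
                   "v1.25.10/bin/linux/amd64/kubeadm")
    EXPECTED_SHA512 = ("43b8f213f1732c092e34008d5334e6622a6603f7ec5890c395ac911d50069d0d"
                       "c11a81fa38436df40fc875a10fee6ee13aa285c017f1de210171065e847c99c5")

    def verify(url, expected, chunk_size=1 << 20):
        digest = hashlib.sha512()
        with urllib.request.urlopen(url) as resp:
            while True:
                chunk = resp.read(chunk_size)
                if not chunk:
                    break
                digest.update(chunk)
        actual = digest.hexdigest()
        if actual != expected:
            raise ValueError(f"checksum mismatch: {actual}")
        return actual

    if __name__ == "__main__":
        print("verified:", verify(KUBEADM_URL, EXPECTED_SHA512)[:16], "...")

The same check applies to the cni-plugins, crictl and kubelet downloads, whose URLs and expected sums also appear verbatim above.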
Oct 2 19:54:27.711942 ignition[984]: INFO : files: op(12): [started] processing unit "nvidia.service" Oct 2 19:54:27.711942 ignition[984]: INFO : files: op(12): [finished] processing unit "nvidia.service" Oct 2 19:54:27.711942 ignition[984]: INFO : files: op(13): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:54:27.711942 ignition[984]: INFO : files: op(13): op(14): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:54:27.711942 ignition[984]: INFO : files: op(13): op(14): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:54:27.711942 ignition[984]: INFO : files: op(13): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:54:27.711942 ignition[984]: INFO : files: op(15): [started] processing unit "prepare-critools.service" Oct 2 19:54:27.711942 ignition[984]: INFO : files: op(15): op(16): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:54:27.711942 ignition[984]: INFO : files: op(15): op(16): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:54:27.711942 ignition[984]: INFO : files: op(15): [finished] processing unit "prepare-critools.service" Oct 2 19:54:27.711942 ignition[984]: INFO : files: op(17): [started] setting preset to enabled for "waagent.service" Oct 2 19:54:27.711942 ignition[984]: INFO : files: op(17): [finished] setting preset to enabled for "waagent.service" Oct 2 19:54:27.711942 ignition[984]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service" Oct 2 19:54:27.711942 ignition[984]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service" Oct 2 19:54:27.711942 ignition[984]: INFO : files: op(19): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:54:27.711942 ignition[984]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:54:27.711942 ignition[984]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:54:27.711942 ignition[984]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:54:27.711942 ignition[984]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:54:27.711942 ignition[984]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:54:27.711942 ignition[984]: INFO : files: files passed Oct 2 19:54:27.826419 kernel: audit: type=1130 audit(1696276467.745:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:27.826461 kernel: audit: type=1131 audit(1696276467.745:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:27.826480 kernel: audit: type=1130 audit(1696276467.771:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:54:27.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:27.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:27.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:27.604380 systemd[1]: mnt-oem689291395.mount: Deactivated successfully. Oct 2 19:54:27.829055 ignition[984]: INFO : Ignition finished successfully Oct 2 19:54:27.609595 systemd[1]: Finished ignition-files.service. Oct 2 19:54:27.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:27.836130 initrd-setup-root-after-ignition[1008]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:54:27.863538 kernel: audit: type=1130 audit(1696276467.835:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:27.863570 kernel: audit: type=1131 audit(1696276467.835:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:27.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:27.630025 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:54:27.646310 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:54:27.707854 systemd[1]: Starting ignition-quench.service... Oct 2 19:54:27.740300 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:54:27.740400 systemd[1]: Finished ignition-quench.service. Oct 2 19:54:27.768432 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:54:27.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:27.771945 systemd[1]: Reached target ignition-complete.target. Oct 2 19:54:27.897971 kernel: audit: type=1130 audit(1696276467.883:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:27.818626 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:54:27.833823 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:54:27.833984 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:54:27.836357 systemd[1]: Reached target initrd-fs.target. Oct 2 19:54:27.858083 systemd[1]: Reached target initrd.target. Oct 2 19:54:27.863576 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. 
Oct 2 19:54:27.864476 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:54:27.881112 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:54:27.901038 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:54:27.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:27.908950 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:54:27.939856 kernel: audit: type=1131 audit(1696276467.923:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:27.911128 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:54:27.915938 systemd[1]: Stopped target timers.target. Oct 2 19:54:27.919358 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:54:27.919487 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:54:27.935640 systemd[1]: Stopped target initrd.target. Oct 2 19:54:27.940014 systemd[1]: Stopped target basic.target. Oct 2 19:54:27.943679 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:54:27.947293 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:54:27.951413 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:54:27.953580 systemd[1]: Stopped target remote-fs.target. Oct 2 19:54:27.957183 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:54:27.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:27.960742 systemd[1]: Stopped target sysinit.target. Oct 2 19:54:27.994165 kernel: audit: type=1131 audit(1696276467.978:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:27.964141 systemd[1]: Stopped target local-fs.target. Oct 2 19:54:27.967891 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:54:27.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:27.971727 systemd[1]: Stopped target swap.target. Oct 2 19:54:28.013406 kernel: audit: type=1131 audit(1696276467.997:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:27.975429 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:54:28.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:27.975565 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:54:28.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:27.989277 systemd[1]: Stopped target cryptsetup.target. 
Oct 2 19:54:28.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:27.994247 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:54:28.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.039645 ignition[1022]: INFO : Ignition 2.14.0 Oct 2 19:54:28.039645 ignition[1022]: INFO : Stage: umount Oct 2 19:54:28.039645 ignition[1022]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:54:28.039645 ignition[1022]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Oct 2 19:54:27.994379 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:54:28.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.055814 iscsid[836]: iscsid shutting down. Oct 2 19:54:28.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.059171 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 2 19:54:28.059171 ignition[1022]: INFO : umount: umount passed Oct 2 19:54:28.059171 ignition[1022]: INFO : Ignition finished successfully Oct 2 19:54:28.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.008240 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:54:28.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.008389 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:54:28.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.013471 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:54:28.013604 systemd[1]: Stopped ignition-files.service. Oct 2 19:54:28.017693 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Oct 2 19:54:28.017821 systemd[1]: Stopped flatcar-metadata-hostname.service. 
Oct 2 19:54:28.022687 systemd[1]: Stopping ignition-mount.service... Oct 2 19:54:28.026173 systemd[1]: Stopping iscsid.service... Oct 2 19:54:28.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.027757 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:54:28.027948 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:54:28.031196 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:54:28.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.033021 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:54:28.033219 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:54:28.053158 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:54:28.053264 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:54:28.057207 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:54:28.057522 systemd[1]: Stopped iscsid.service. Oct 2 19:54:28.059596 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:54:28.059684 systemd[1]: Stopped ignition-mount.service. Oct 2 19:54:28.064151 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:54:28.064263 systemd[1]: Stopped ignition-disks.service. Oct 2 19:54:28.066531 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:54:28.066565 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:54:28.072268 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 2 19:54:28.072318 systemd[1]: Stopped ignition-fetch.service. Oct 2 19:54:28.076876 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:54:28.076965 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:54:28.081186 systemd[1]: Stopped target paths.target. Oct 2 19:54:28.085438 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:54:28.089966 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:54:28.092141 systemd[1]: Stopped target slices.target. Oct 2 19:54:28.093799 systemd[1]: Stopped target sockets.target. Oct 2 19:54:28.097803 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:54:28.097850 systemd[1]: Closed iscsid.socket. Oct 2 19:54:28.101166 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:54:28.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.101238 systemd[1]: Stopped ignition-setup.service. Oct 2 19:54:28.106875 systemd[1]: Stopping iscsiuio.service... Oct 2 19:54:28.109653 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:54:28.110156 systemd[1]: iscsiuio.service: Deactivated successfully. 
Oct 2 19:54:28.110255 systemd[1]: Stopped iscsiuio.service. Oct 2 19:54:28.112273 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:54:28.112362 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:54:28.153808 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:54:28.157053 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:54:28.165706 systemd[1]: Stopped target network.target. Oct 2 19:54:28.182830 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:54:28.182886 systemd[1]: Closed iscsiuio.socket. Oct 2 19:54:28.187850 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:54:28.187906 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:54:28.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.193830 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:54:28.197388 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:54:28.201989 systemd-networkd[829]: eth0: DHCPv6 lease lost Oct 2 19:54:28.204712 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:54:28.204813 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:54:28.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.211398 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:54:28.213751 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:54:28.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.217000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:54:28.218070 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:54:28.218121 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:54:28.221000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:54:28.224360 systemd[1]: Stopping network-cleanup.service... Oct 2 19:54:28.227761 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:54:28.227825 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:54:28.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.233921 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:54:28.233984 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:54:28.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.240531 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:54:28.240580 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:54:28.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.246528 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:54:28.250062 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:54:28.252137 systemd[1]: Stopped systemd-udevd.service. 
Oct 2 19:54:28.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.256560 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:54:28.256644 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:54:28.258961 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:54:28.261291 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:54:28.265176 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:54:28.269069 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:54:28.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.274782 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:54:28.274832 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:54:28.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.280639 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:54:28.280685 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:54:28.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.287393 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:54:28.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.289317 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:54:28.289390 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:54:28.303043 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:54:28.305502 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:54:28.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.332943 kernel: hv_netvsc 000d3ab9-1fe7-000d-3ab9-1fe7000d3ab9 eth0: Data path switched from VF: enP53948s1 Oct 2 19:54:28.352342 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:54:28.352464 systemd[1]: Stopped network-cleanup.service. Oct 2 19:54:28.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:28.358842 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:54:28.363597 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:54:28.374590 systemd[1]: Switching root. 
Oct 2 19:54:28.400181 systemd-journald[183]: Journal stopped Oct 2 19:54:38.844120 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Oct 2 19:54:38.844149 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:54:38.844160 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 19:54:38.844168 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:54:38.844178 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:54:38.844187 kernel: SELinux: policy capability open_perms=1 Oct 2 19:54:38.844199 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:54:38.844209 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:54:38.844217 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:54:38.844228 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:54:38.844236 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:54:38.844249 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:54:38.844258 systemd[1]: Successfully loaded SELinux policy in 280.601ms. Oct 2 19:54:38.844269 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.658ms. Oct 2 19:54:38.844284 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:54:38.844296 systemd[1]: Detected virtualization microsoft. Oct 2 19:54:38.844306 systemd[1]: Detected architecture x86-64. Oct 2 19:54:38.844316 systemd[1]: Detected first boot. Oct 2 19:54:38.844330 systemd[1]: Hostname set to <ci-3510.3.0-a-d5a4e3b63c>. Oct 2 19:54:38.844342 systemd[1]: Initializing machine ID from random generator. Oct 2 19:54:38.844354 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:54:38.844364 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:54:38.844377 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:54:38.844390 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 2 19:54:38.844404 kernel: kauditd_printk_skb: 41 callbacks suppressed Oct 2 19:54:38.844413 kernel: audit: type=1334 audit(1696276478.381:89): prog-id=12 op=LOAD Oct 2 19:54:38.844425 kernel: audit: type=1334 audit(1696276478.381:90): prog-id=3 op=UNLOAD Oct 2 19:54:38.844435 kernel: audit: type=1334 audit(1696276478.386:91): prog-id=13 op=LOAD Oct 2 19:54:38.844445 kernel: audit: type=1334 audit(1696276478.390:92): prog-id=14 op=LOAD Oct 2 19:54:38.844455 kernel: audit: type=1334 audit(1696276478.390:93): prog-id=4 op=UNLOAD Oct 2 19:54:38.844464 kernel: audit: type=1334 audit(1696276478.390:94): prog-id=5 op=UNLOAD Oct 2 19:54:38.844475 kernel: audit: type=1334 audit(1696276478.395:95): prog-id=15 op=LOAD Oct 2 19:54:38.844483 kernel: audit: type=1334 audit(1696276478.395:96): prog-id=12 op=UNLOAD Oct 2 19:54:38.844496 kernel: audit: type=1334 audit(1696276478.418:97): prog-id=16 op=LOAD Oct 2 19:54:38.844505 kernel: audit: type=1334 audit(1696276478.423:98): prog-id=17 op=LOAD Oct 2 19:54:38.844517 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:54:38.844526 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:54:38.844538 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:54:38.844548 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:54:38.844559 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:54:38.844572 systemd[1]: Created slice system-getty.slice. Oct 2 19:54:38.844587 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:54:38.844596 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:54:38.844609 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:54:38.844619 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:54:38.844632 systemd[1]: Created slice user.slice. Oct 2 19:54:38.844642 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:54:38.844654 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:54:38.844663 systemd[1]: Set up automount boot.automount. Oct 2 19:54:38.844676 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:54:38.844691 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:54:38.844701 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:54:38.844715 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:54:38.844726 systemd[1]: Reached target integritysetup.target. Oct 2 19:54:38.844738 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:54:38.844751 systemd[1]: Reached target remote-fs.target. Oct 2 19:54:38.844763 systemd[1]: Reached target slices.target. Oct 2 19:54:38.844776 systemd[1]: Reached target swap.target. Oct 2 19:54:38.844789 systemd[1]: Reached target torcx.target. Oct 2 19:54:38.844798 systemd[1]: Reached target veritysetup.target. Oct 2 19:54:38.844811 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:54:38.844823 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:54:38.844833 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:54:38.844848 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:54:38.844862 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:54:38.844873 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:54:38.844887 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:54:38.844899 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:54:38.844912 systemd[1]: Mounting media.mount... 
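Slice names above such as "system-addon\x2dconfig.slice" carry systemd's \xNN escaping, in which a literal "-" inside a component is written as \x2d (a plain "-" separates levels in a slice name). A quick way to decode such names, shown only as an illustration:

    # Illustration only: decode the \xNN escapes systemd uses in unit names,
    # e.g. the "system-addon\x2dconfig.slice" entry created above.
    import codecs

    def unescape_unit_name(name: str) -> str:
        # Interpret the C-style \xNN escapes as their literal characters.
        return codecs.decode(name, "unicode_escape")

    print(unescape_unit_name(r"system-addon\x2dconfig.slice"))   # system-addon-config.slice
    print(unescape_unit_name(r"system-serial\x2dgetty.slice"))   # system-serial-getty.slice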
Oct 2 19:54:38.844932 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:54:38.844947 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:54:38.844964 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:54:38.844980 systemd[1]: Mounting tmp.mount... Oct 2 19:54:38.844994 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:54:38.845009 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:54:38.845026 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:54:38.845044 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:54:38.845061 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:54:38.845076 systemd[1]: Starting modprobe@drm.service... Oct 2 19:54:38.845091 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:54:38.845108 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:54:38.845119 systemd[1]: Starting modprobe@loop.service... Oct 2 19:54:38.845132 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:54:38.845146 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:54:38.845155 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:54:38.845167 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:54:38.845179 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:54:38.845192 systemd[1]: Stopped systemd-journald.service. Oct 2 19:54:38.845203 systemd[1]: Starting systemd-journald.service... Oct 2 19:54:38.845217 kernel: loop: module loaded Oct 2 19:54:38.845228 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:54:38.845238 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:54:38.845250 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:54:38.845263 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:54:38.845273 kernel: fuse: init (API version 7.34) Oct 2 19:54:38.845285 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:54:38.845297 systemd[1]: Stopped verity-setup.service. Oct 2 19:54:38.845308 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:54:38.845323 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:54:38.845335 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:54:38.845350 systemd-journald[1148]: Journal started Oct 2 19:54:38.845397 systemd-journald[1148]: Runtime Journal (/run/log/journal/b4baaf010b564281b81861993002efec) is 8.0M, max 159.0M, 151.0M free. 
Oct 2 19:54:30.199000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:54:30.738000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 19:54:30.751000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:54:30.751000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:54:30.751000 audit: BPF prog-id=10 op=LOAD Oct 2 19:54:30.751000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:54:30.751000 audit: BPF prog-id=11 op=LOAD Oct 2 19:54:30.751000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:54:38.381000 audit: BPF prog-id=12 op=LOAD Oct 2 19:54:38.381000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:54:38.386000 audit: BPF prog-id=13 op=LOAD Oct 2 19:54:38.390000 audit: BPF prog-id=14 op=LOAD Oct 2 19:54:38.390000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:54:38.390000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:54:38.395000 audit: BPF prog-id=15 op=LOAD Oct 2 19:54:38.395000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:54:38.418000 audit: BPF prog-id=16 op=LOAD Oct 2 19:54:38.423000 audit: BPF prog-id=17 op=LOAD Oct 2 19:54:38.423000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:54:38.423000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:54:38.427000 audit: BPF prog-id=18 op=LOAD Oct 2 19:54:38.427000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:54:38.431000 audit: BPF prog-id=19 op=LOAD Oct 2 19:54:38.431000 audit: BPF prog-id=20 op=LOAD Oct 2 19:54:38.431000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:54:38.431000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:54:38.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.449000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:54:38.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:54:38.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.759000 audit: BPF prog-id=21 op=LOAD Oct 2 19:54:38.759000 audit: BPF prog-id=22 op=LOAD Oct 2 19:54:38.759000 audit: BPF prog-id=23 op=LOAD Oct 2 19:54:38.759000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:54:38.759000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:54:38.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.837000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:54:38.837000 audit[1148]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff228f4fa0 a2=4000 a3=7fff228f503c items=0 ppid=1 pid=1148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:54:38.837000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:54:38.381103 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:54:31.756825 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2023-10-02T19:54:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:54:38.433028 systemd[1]: systemd-journald.service: Deactivated successfully. 
Oct 2 19:54:31.757248 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2023-10-02T19:54:31Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:54:31.757271 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2023-10-02T19:54:31Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:54:31.757308 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2023-10-02T19:54:31Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:54:31.757319 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2023-10-02T19:54:31Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:54:31.757369 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2023-10-02T19:54:31Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:54:31.757385 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2023-10-02T19:54:31Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:54:31.757595 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2023-10-02T19:54:31Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:54:31.757645 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2023-10-02T19:54:31Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:54:31.757663 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2023-10-02T19:54:31Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:54:31.758108 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2023-10-02T19:54:31Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:54:31.758146 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2023-10-02T19:54:31Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:54:31.758169 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2023-10-02T19:54:31Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:54:31.758185 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2023-10-02T19:54:31Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:54:31.758204 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2023-10-02T19:54:31Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:54:31.758220 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2023-10-02T19:54:31Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:54:37.326333 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2023-10-02T19:54:37Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:54:37.326587 /usr/lib/systemd/system-generators/torcx-generator[1055]: 
time="2023-10-02T19:54:37Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:54:37.326732 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2023-10-02T19:54:37Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:54:37.327305 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2023-10-02T19:54:37Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:54:37.327367 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2023-10-02T19:54:37Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:54:37.327432 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2023-10-02T19:54:37Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:54:38.858250 systemd[1]: Started systemd-journald.service. Oct 2 19:54:38.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.856343 systemd[1]: Mounted media.mount. Oct 2 19:54:38.858198 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:54:38.860282 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:54:38.862426 systemd[1]: Mounted tmp.mount. Oct 2 19:54:38.864395 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:54:38.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.866722 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:54:38.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.869124 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:54:38.869264 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:54:38.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.871615 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Oct 2 19:54:38.871752 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:54:38.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.874204 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:54:38.874337 systemd[1]: Finished modprobe@drm.service. Oct 2 19:54:38.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.876546 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:54:38.876677 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:54:38.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.879233 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:54:38.879364 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:54:38.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.881789 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:54:38.881937 systemd[1]: Finished modprobe@loop.service. Oct 2 19:54:38.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.884864 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:54:38.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.887407 systemd[1]: Finished systemd-network-generator.service. 
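
The modprobe@configfs / dm_mod / drm / efi_pstore / fuse / loop instances above are one-shot jobs that only load a module and exit, which is why each SERVICE_START is immediately followed by a SERVICE_STOP. A quick after-the-fact check of module visibility, as a sketch: the module names come from the units above, and /sys/module is only a heuristic since built-ins without parameters may not appear there:

    from pathlib import Path

    MODULES = ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]   # from the units above

    for mod in MODULES:
        # Loaded modules (and built-ins that expose parameters) appear under /sys/module/<name>.
        state = "visible" if Path("/sys/module", mod).is_dir() else "not visible"
        print(f"{mod:11s} {state} in /sys/module")
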
Oct 2 19:54:38.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.890194 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:54:38.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.893240 systemd[1]: Reached target network-pre.target. Oct 2 19:54:38.897180 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:54:38.901198 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:54:38.903998 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:54:38.916067 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:54:38.919431 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:54:38.921429 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:54:38.922867 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:54:38.925643 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:54:38.926985 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:54:38.932130 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:54:38.940742 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:54:38.943091 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:54:38.964845 systemd-journald[1148]: Time spent on flushing to /var/log/journal/b4baaf010b564281b81861993002efec is 29.784ms for 1168 entries. Oct 2 19:54:38.964845 systemd-journald[1148]: System Journal (/var/log/journal/b4baaf010b564281b81861993002efec) is 8.0M, max 2.6G, 2.6G free. Oct 2 19:54:39.043714 systemd-journald[1148]: Received client request to flush runtime journal. Oct 2 19:54:38.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:39.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:38.975995 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:54:39.044158 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 2 19:54:38.979233 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:54:38.981759 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:54:38.983904 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:54:39.005270 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:54:39.044600 systemd[1]: Finished systemd-journal-flush.service. 
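
The journald line above reports 29.784 ms spent flushing 1168 entries to /var/log/journal/b4baaf01...; a quick back-of-the-envelope check of the per-entry cost, using exactly the figures from the log:

    flush_ms, entries = 29.784, 1168            # figures reported by systemd-journald above
    print(f"~{flush_ms * 1000 / entries:.1f} microseconds per journal entry")   # ~25.5 us
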
Oct 2 19:54:39.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:39.415186 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:54:39.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:40.029000 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:54:40.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:40.031000 audit: BPF prog-id=24 op=LOAD Oct 2 19:54:40.031000 audit: BPF prog-id=25 op=LOAD Oct 2 19:54:40.031000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:54:40.031000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:54:40.033046 systemd[1]: Starting systemd-udevd.service... Oct 2 19:54:40.050972 systemd-udevd[1181]: Using default interface naming scheme 'v252'. Oct 2 19:54:40.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:40.233000 audit: BPF prog-id=26 op=LOAD Oct 2 19:54:40.230120 systemd[1]: Started systemd-udevd.service. Oct 2 19:54:40.235249 systemd[1]: Starting systemd-networkd.service... Oct 2 19:54:40.269060 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 19:54:40.315000 audit: BPF prog-id=27 op=LOAD Oct 2 19:54:40.315000 audit: BPF prog-id=28 op=LOAD Oct 2 19:54:40.316000 audit: BPF prog-id=29 op=LOAD Oct 2 19:54:40.317827 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:54:40.329104 kernel: hv_utils: Registering HyperV Utility Driver Oct 2 19:54:40.329165 kernel: hv_vmbus: registering driver hv_utils Oct 2 19:54:40.355240 kernel: hv_utils: Heartbeat IC version 3.0 Oct 2 19:54:40.355310 kernel: hv_utils: Shutdown IC version 3.2 Oct 2 19:54:40.355336 kernel: hv_utils: TimeSync IC version 4.0 Oct 2 19:54:40.332000 audit[1195]: AVC avc: denied { confidentiality } for pid=1195 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 19:54:41.008699 kernel: hv_vmbus: registering driver hv_balloon Oct 2 19:54:41.015540 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Oct 2 19:54:41.029310 systemd[1]: Started systemd-userdbd.service. Oct 2 19:54:41.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:54:41.053591 kernel: hv_vmbus: registering driver hyperv_fb Oct 2 19:54:41.062686 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Oct 2 19:54:41.062750 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Oct 2 19:54:41.070542 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 19:54:41.073838 kernel: Console: switching to colour dummy device 80x25 Oct 2 19:54:41.079626 kernel: Console: switching to colour frame buffer device 128x48 Oct 2 19:54:40.332000 audit[1195]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=558ed1a932a0 a1=f884 a2=7f05a7e62bc5 a3=5 items=10 ppid=1181 pid=1195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:54:40.332000 audit: CWD cwd="/" Oct 2 19:54:40.332000 audit: PATH item=0 name=(null) inode=15343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:54:40.332000 audit: PATH item=1 name=(null) inode=15344 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:54:40.332000 audit: PATH item=2 name=(null) inode=15343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:54:40.332000 audit: PATH item=3 name=(null) inode=15345 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:54:40.332000 audit: PATH item=4 name=(null) inode=15343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:54:40.332000 audit: PATH item=5 name=(null) inode=15346 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:54:40.332000 audit: PATH item=6 name=(null) inode=15343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:54:40.332000 audit: PATH item=7 name=(null) inode=15347 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:54:40.332000 audit: PATH item=8 name=(null) inode=15343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:54:40.332000 audit: PATH item=9 name=(null) inode=15348 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:54:40.332000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 19:54:41.256673 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1192) Oct 2 19:54:41.286331 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Oct 2 19:54:41.322401 systemd-networkd[1194]: lo: Link UP Oct 2 19:54:41.322742 systemd-networkd[1194]: lo: Gained carrier Oct 2 19:54:41.323390 systemd-networkd[1194]: Enumeration completed Oct 2 19:54:41.323578 
systemd[1]: Started systemd-networkd.service. Oct 2 19:54:41.327389 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:54:41.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:41.346782 systemd-networkd[1194]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:54:41.352980 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:54:41.357916 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:54:41.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:41.361907 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:54:41.401560 kernel: mlx5_core d2bc:00:02.0 enP53948s1: Link up Oct 2 19:54:41.439546 kernel: hv_netvsc 000d3ab9-1fe7-000d-3ab9-1fe7000d3ab9 eth0: Data path switched to VF: enP53948s1 Oct 2 19:54:41.440672 systemd-networkd[1194]: enP53948s1: Link UP Oct 2 19:54:41.440959 systemd-networkd[1194]: eth0: Link UP Oct 2 19:54:41.441062 systemd-networkd[1194]: eth0: Gained carrier Oct 2 19:54:41.446973 systemd-networkd[1194]: enP53948s1: Gained carrier Oct 2 19:54:41.480650 systemd-networkd[1194]: eth0: DHCPv4 address 10.200.8.20/24, gateway 10.200.8.1 acquired from 168.63.129.16 Oct 2 19:54:41.614545 lvm[1258]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:54:41.641639 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:54:41.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:41.644175 systemd[1]: Reached target cryptsetup.target. Oct 2 19:54:41.647359 systemd[1]: Starting lvm2-activation.service... Oct 2 19:54:41.651832 lvm[1259]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:54:41.671633 systemd[1]: Finished lvm2-activation.service. Oct 2 19:54:41.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:41.674196 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:54:41.676569 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:54:41.676605 systemd[1]: Reached target local-fs.target. Oct 2 19:54:41.678711 systemd[1]: Reached target machines.target. Oct 2 19:54:41.682028 systemd[1]: Starting ldconfig.service... Oct 2 19:54:41.700571 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:54:41.700652 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:54:41.701869 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:54:41.705260 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:54:41.709134 systemd[1]: Starting systemd-machine-id-commit.service... 
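
systemd-networkd above brings up eth0 with the DHCPv4 address 10.200.8.20/24 and gateway 10.200.8.1, handed out by Azure's 168.63.129.16. A short sketch with Python's standard ipaddress module deriving the network, broadcast, and on-link check from exactly those values:

    import ipaddress

    iface = ipaddress.ip_interface("10.200.8.20/24")       # DHCPv4 lease from the log above
    gateway = ipaddress.ip_address("10.200.8.1")

    print("network:   ", iface.network)                    # 10.200.8.0/24
    print("broadcast: ", iface.network.broadcast_address)  # 10.200.8.255
    print("gateway on-link:", gateway in iface.network)    # True
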
Oct 2 19:54:41.711760 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:54:41.711857 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:54:41.713062 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:54:42.247057 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1261 (bootctl) Oct 2 19:54:42.248703 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:54:42.383334 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:54:42.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:42.754211 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:54:42.764697 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:54:42.765379 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:54:42.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:42.855651 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:54:42.919608 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:54:42.953635 systemd-networkd[1194]: eth0: Gained IPv6LL Oct 2 19:54:42.958377 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:54:42.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:43.107979 systemd-fsck[1269]: fsck.fat 4.2 (2021-01-31) Oct 2 19:54:43.107979 systemd-fsck[1269]: /dev/sda1: 789 files, 115069/258078 clusters Oct 2 19:54:43.110054 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:54:43.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:43.114578 systemd[1]: Mounting boot.mount... Oct 2 19:54:43.136761 systemd[1]: Mounted boot.mount. Oct 2 19:54:43.152021 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:54:43.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:44.215778 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:54:44.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:44.220255 systemd[1]: Starting audit-rules.service... 
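
The systemd-tmpfiles warnings just below ("Duplicate line for path /run/lock", "/root", "/var/lib/systemd") mean that more than one tmpfiles.d snippet declares the same path; the later declaration is ignored. A rough sketch for spotting such duplicates ahead of time; the directory list is the standard tmpfiles.d search path, and the parsing is deliberately simplified (it does not model /etc overriding /usr/lib by file name):

    from collections import defaultdict
    from pathlib import Path

    # Standard tmpfiles.d search path; /etc and /run override /usr/lib by file name,
    # which this simplified scan does not model.
    SEARCH_DIRS = ["/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d"]

    declared = defaultdict(list)
    for d in SEARCH_DIRS:
        base = Path(d)
        if not base.is_dir():
            continue
        for conf in sorted(base.glob("*.conf")):
            for raw in conf.read_text().splitlines():
                line = raw.strip()
                if not line or line.startswith("#"):
                    continue
                parts = line.split()
                if len(parts) >= 2:              # <type> <path> [mode user group age argument]
                    declared[parts[1]].append(conf.name)

    for path, sources in declared.items():
        if len(sources) > 1:
            print(f"{path} declared in: {', '.join(sources)}")
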
Oct 2 19:54:44.221581 kernel: kauditd_printk_skb: 82 callbacks suppressed Oct 2 19:54:44.221646 kernel: audit: type=1130 audit(1696276484.217:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:44.236466 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:54:44.239881 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:54:44.242000 audit: BPF prog-id=30 op=LOAD Oct 2 19:54:44.244535 systemd[1]: Starting systemd-resolved.service... Oct 2 19:54:44.251738 kernel: audit: type=1334 audit(1696276484.242:167): prog-id=30 op=LOAD Oct 2 19:54:44.251799 kernel: audit: type=1334 audit(1696276484.249:168): prog-id=31 op=LOAD Oct 2 19:54:44.249000 audit: BPF prog-id=31 op=LOAD Oct 2 19:54:44.254254 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:54:44.257858 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:54:44.288436 kernel: audit: type=1127 audit(1696276484.276:169): pid=1287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:54:44.276000 audit[1287]: SYSTEM_BOOT pid=1287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:54:44.291708 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:54:44.303712 kernel: audit: type=1130 audit(1696276484.292:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:44.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:44.310051 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:54:44.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:44.312109 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:54:44.321544 kernel: audit: type=1130 audit(1696276484.310:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:44.379264 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:54:44.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:44.381478 systemd[1]: Reached target time-set.target. Oct 2 19:54:44.392275 kernel: audit: type=1130 audit(1696276484.380:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:54:44.408669 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:54:44.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:44.421740 kernel: audit: type=1130 audit(1696276484.409:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:44.433473 systemd-resolved[1280]: Positive Trust Anchors: Oct 2 19:54:44.433488 systemd-resolved[1280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:54:44.433545 systemd-resolved[1280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:54:44.457000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:54:44.459889 systemd[1]: Finished audit-rules.service. Oct 2 19:54:44.460742 augenrules[1297]: No rules Oct 2 19:54:44.457000 audit[1297]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffcf5dd550 a2=420 a3=0 items=0 ppid=1276 pid=1297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:54:44.457000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:54:44.473277 systemd-timesyncd[1284]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org). Oct 2 19:54:44.473778 kernel: audit: type=1305 audit(1696276484.457:174): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:54:44.473805 kernel: audit: type=1300 audit(1696276484.457:174): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffcf5dd550 a2=420 a3=0 items=0 ppid=1276 pid=1297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:54:44.473326 systemd-timesyncd[1284]: Initial clock synchronization to Mon 2023-10-02 19:54:44.462874 UTC. Oct 2 19:54:44.514939 systemd-resolved[1280]: Using system hostname 'ci-3510.3.0-a-d5a4e3b63c'. Oct 2 19:54:44.516468 systemd[1]: Started systemd-resolved.service. Oct 2 19:54:44.518860 systemd[1]: Reached target network.target. Oct 2 19:54:44.520983 systemd[1]: Reached target network-online.target. Oct 2 19:54:44.523176 systemd[1]: Reached target nss-lookup.target. Oct 2 19:54:47.965240 ldconfig[1260]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:54:47.977139 systemd[1]: Finished ldconfig.service. Oct 2 19:54:47.980798 systemd[1]: Starting systemd-update-done.service... Oct 2 19:54:47.999749 systemd[1]: Finished systemd-update-done.service. 
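
systemd-resolved above installs the root DNSSEC trust anchor ". IN DS 20326 8 2 e06d44b8..." (key tag 20326 is the root KSK introduced in the 2017 rollover). A small sketch decoding the four DS fields, with the algorithm and digest-type names taken from the IANA DNSSEC registries:

    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = ds.split()

    ALGORITHMS = {8: "RSASHA256", 13: "ECDSAP256SHA256"}     # IANA DNSSEC algorithm numbers
    DIGEST_TYPES = {1: "SHA-1", 2: "SHA-256"}                # IANA DS digest types

    print(f"zone={owner} key_tag={key_tag} "
          f"algorithm={ALGORITHMS[int(algorithm)]} "
          f"digest_type={DIGEST_TYPES[int(digest_type)]} ({len(digest) // 2}-byte digest)")
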
Oct 2 19:54:48.002547 systemd[1]: Reached target sysinit.target. Oct 2 19:54:48.004952 systemd[1]: Started motdgen.path. Oct 2 19:54:48.006841 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:54:48.009990 systemd[1]: Started logrotate.timer. Oct 2 19:54:48.011938 systemd[1]: Started mdadm.timer. Oct 2 19:54:48.013638 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:54:48.015678 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:54:48.015728 systemd[1]: Reached target paths.target. Oct 2 19:54:48.017555 systemd[1]: Reached target timers.target. Oct 2 19:54:48.019829 systemd[1]: Listening on dbus.socket. Oct 2 19:54:48.023182 systemd[1]: Starting docker.socket... Oct 2 19:54:48.038021 systemd[1]: Listening on sshd.socket. Oct 2 19:54:48.039943 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:54:48.040442 systemd[1]: Listening on docker.socket. Oct 2 19:54:48.042316 systemd[1]: Reached target sockets.target. Oct 2 19:54:48.044101 systemd[1]: Reached target basic.target. Oct 2 19:54:48.045917 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:54:48.045959 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:54:48.047068 systemd[1]: Starting containerd.service... Oct 2 19:54:48.050631 systemd[1]: Starting dbus.service... Oct 2 19:54:48.053566 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:54:48.057253 systemd[1]: Starting extend-filesystems.service... Oct 2 19:54:48.059144 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:54:48.060700 systemd[1]: Starting motdgen.service... Oct 2 19:54:48.067315 systemd[1]: Started nvidia.service. Oct 2 19:54:48.070433 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:54:48.073801 systemd[1]: Starting prepare-critools.service... Oct 2 19:54:48.076752 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:54:48.080002 systemd[1]: Starting sshd-keygen.service... Oct 2 19:54:48.087947 systemd[1]: Starting systemd-logind.service... Oct 2 19:54:48.090222 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:54:48.090309 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:54:48.090891 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:54:48.093730 systemd[1]: Starting update-engine.service... Oct 2 19:54:48.101520 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:54:48.113951 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:54:48.114186 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:54:48.136558 jq[1323]: true Oct 2 19:54:48.144266 jq[1307]: false Oct 2 19:54:48.145777 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:54:48.146011 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Oct 2 19:54:48.152873 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:54:48.153073 systemd[1]: Finished motdgen.service. Oct 2 19:54:48.163373 extend-filesystems[1308]: Found sda Oct 2 19:54:48.165510 extend-filesystems[1308]: Found sda1 Oct 2 19:54:48.167981 extend-filesystems[1308]: Found sda2 Oct 2 19:54:48.167981 extend-filesystems[1308]: Found sda3 Oct 2 19:54:48.167981 extend-filesystems[1308]: Found usr Oct 2 19:54:48.167981 extend-filesystems[1308]: Found sda4 Oct 2 19:54:48.167981 extend-filesystems[1308]: Found sda6 Oct 2 19:54:48.167981 extend-filesystems[1308]: Found sda7 Oct 2 19:54:48.167981 extend-filesystems[1308]: Found sda9 Oct 2 19:54:48.167981 extend-filesystems[1308]: Checking size of /dev/sda9 Oct 2 19:54:48.195879 jq[1338]: true Oct 2 19:54:48.228808 systemd-logind[1319]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 2 19:54:48.231693 systemd-logind[1319]: New seat seat0. Oct 2 19:54:48.231837 env[1333]: time="2023-10-02T19:54:48.231801263Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:54:48.240957 tar[1326]: ./ Oct 2 19:54:48.240957 tar[1326]: ./macvlan Oct 2 19:54:48.246584 tar[1329]: crictl Oct 2 19:54:48.261951 extend-filesystems[1308]: Old size kept for /dev/sda9 Oct 2 19:54:48.261951 extend-filesystems[1308]: Found sr0 Oct 2 19:54:48.259255 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:54:48.259462 systemd[1]: Finished extend-filesystems.service. Oct 2 19:54:48.290457 dbus-daemon[1306]: [system] SELinux support is enabled Oct 2 19:54:48.290661 systemd[1]: Started dbus.service. Oct 2 19:54:48.295409 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:54:48.295445 systemd[1]: Reached target system-config.target. Oct 2 19:54:48.299715 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:54:48.299748 systemd[1]: Reached target user-config.target. Oct 2 19:54:48.306378 systemd[1]: Started systemd-logind.service. Oct 2 19:54:48.358723 bash[1361]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:54:48.359507 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:54:48.369685 tar[1326]: ./static Oct 2 19:54:48.373074 env[1333]: time="2023-10-02T19:54:48.373027710Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:54:48.373330 env[1333]: time="2023-10-02T19:54:48.373312459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:54:48.375798 env[1333]: time="2023-10-02T19:54:48.375761957Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:54:48.375912 env[1333]: time="2023-10-02T19:54:48.375896385Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:54:48.376221 env[1333]: time="2023-10-02T19:54:48.376199324Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:54:48.376299 env[1333]: time="2023-10-02T19:54:48.376285978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 19:54:48.376366 env[1333]: time="2023-10-02T19:54:48.376352443Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:54:48.376437 env[1333]: time="2023-10-02T19:54:48.376424605Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:54:48.376594 env[1333]: time="2023-10-02T19:54:48.376578623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:54:48.376912 env[1333]: time="2023-10-02T19:54:48.376893955Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:54:48.377158 env[1333]: time="2023-10-02T19:54:48.377136226Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:54:48.380688 env[1333]: time="2023-10-02T19:54:48.380664252Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:54:48.380852 env[1333]: time="2023-10-02T19:54:48.380832562Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:54:48.380935 env[1333]: time="2023-10-02T19:54:48.380922015Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:54:48.398260 systemd[1]: nvidia.service: Deactivated successfully. Oct 2 19:54:48.408888 env[1333]: time="2023-10-02T19:54:48.408286472Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:54:48.411221 env[1333]: time="2023-10-02T19:54:48.409018383Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:54:48.411221 env[1333]: time="2023-10-02T19:54:48.409046668Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:54:48.411221 env[1333]: time="2023-10-02T19:54:48.409102838Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:54:48.411221 env[1333]: time="2023-10-02T19:54:48.409122128Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:54:48.411221 env[1333]: time="2023-10-02T19:54:48.409196488Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:54:48.411221 env[1333]: time="2023-10-02T19:54:48.409215778Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:54:48.411221 env[1333]: time="2023-10-02T19:54:48.409246562Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:54:48.411221 env[1333]: time="2023-10-02T19:54:48.409265852Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Oct 2 19:54:48.411221 env[1333]: time="2023-10-02T19:54:48.409285441Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:54:48.411221 env[1333]: time="2023-10-02T19:54:48.409313626Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:54:48.411221 env[1333]: time="2023-10-02T19:54:48.409333416Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:54:48.411221 env[1333]: time="2023-10-02T19:54:48.410565561Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:54:48.411221 env[1333]: time="2023-10-02T19:54:48.410674503Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:54:48.411221 env[1333]: time="2023-10-02T19:54:48.411165642Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:54:48.411990 env[1333]: time="2023-10-02T19:54:48.411200124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:54:48.411990 env[1333]: time="2023-10-02T19:54:48.411564630Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:54:48.411990 env[1333]: time="2023-10-02T19:54:48.411648086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:54:48.411990 env[1333]: time="2023-10-02T19:54:48.411666776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:54:48.412378 env[1333]: time="2023-10-02T19:54:48.411687265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:54:48.412378 env[1333]: time="2023-10-02T19:54:48.412173406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:54:48.412378 env[1333]: time="2023-10-02T19:54:48.412192996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:54:48.412378 env[1333]: time="2023-10-02T19:54:48.412222580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:54:48.412378 env[1333]: time="2023-10-02T19:54:48.412237372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:54:48.412378 env[1333]: time="2023-10-02T19:54:48.412255363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:54:48.412378 env[1333]: time="2023-10-02T19:54:48.412279450Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:54:48.412879 env[1333]: time="2023-10-02T19:54:48.412808869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:54:48.412879 env[1333]: time="2023-10-02T19:54:48.412837753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:54:48.412879 env[1333]: time="2023-10-02T19:54:48.412856144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Oct 2 19:54:48.413067 env[1333]: time="2023-10-02T19:54:48.412994070Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:54:48.413067 env[1333]: time="2023-10-02T19:54:48.413019557Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:54:48.413067 env[1333]: time="2023-10-02T19:54:48.413037147Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:54:48.413272 env[1333]: time="2023-10-02T19:54:48.413207157Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:54:48.413332 env[1333]: time="2023-10-02T19:54:48.413258430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 19:54:48.413766 env[1333]: time="2023-10-02T19:54:48.413692299Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:54:48.439242 env[1333]: time="2023-10-02T19:54:48.413915581Z" level=info msg="Connect containerd service" Oct 2 19:54:48.439242 env[1333]: time="2023-10-02T19:54:48.413971651Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:54:48.439242 env[1333]: time="2023-10-02T19:54:48.414824298Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network 
for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:54:48.439242 env[1333]: time="2023-10-02T19:54:48.414888663Z" level=info msg="Start subscribing containerd event" Oct 2 19:54:48.439242 env[1333]: time="2023-10-02T19:54:48.414946133Z" level=info msg="Start recovering state" Oct 2 19:54:48.439242 env[1333]: time="2023-10-02T19:54:48.415013997Z" level=info msg="Start event monitor" Oct 2 19:54:48.439242 env[1333]: time="2023-10-02T19:54:48.415035186Z" level=info msg="Start snapshots syncer" Oct 2 19:54:48.439242 env[1333]: time="2023-10-02T19:54:48.415046480Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:54:48.439242 env[1333]: time="2023-10-02T19:54:48.415056074Z" level=info msg="Start streaming server" Oct 2 19:54:48.439242 env[1333]: time="2023-10-02T19:54:48.415502437Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:54:48.439242 env[1333]: time="2023-10-02T19:54:48.415602984Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:54:48.415749 systemd[1]: Started containerd.service. Oct 2 19:54:48.458064 tar[1326]: ./vlan Oct 2 19:54:48.459569 env[1333]: time="2023-10-02T19:54:48.459515847Z" level=info msg="containerd successfully booted in 0.232465s" Oct 2 19:54:48.539663 tar[1326]: ./portmap Oct 2 19:54:48.617334 tar[1326]: ./host-local Oct 2 19:54:48.661855 tar[1326]: ./vrf Oct 2 19:54:48.699562 tar[1326]: ./bridge Oct 2 19:54:48.747506 tar[1326]: ./tuning Oct 2 19:54:48.779471 update_engine[1320]: I1002 19:54:48.778749 1320 main.cc:92] Flatcar Update Engine starting Oct 2 19:54:48.784203 tar[1326]: ./firewall Oct 2 19:54:48.819407 systemd[1]: Started update-engine.service. Oct 2 19:54:48.821038 update_engine[1320]: I1002 19:54:48.820943 1320 update_check_scheduler.cc:74] Next update check in 11m53s Oct 2 19:54:48.824209 systemd[1]: Started locksmithd.service. Oct 2 19:54:48.834902 tar[1326]: ./host-device Oct 2 19:54:48.875148 tar[1326]: ./sbr Oct 2 19:54:48.912436 tar[1326]: ./loopback Oct 2 19:54:48.946620 tar[1326]: ./dhcp Oct 2 19:54:49.010586 systemd[1]: Finished prepare-critools.service. Oct 2 19:54:49.071062 tar[1326]: ./ptp Oct 2 19:54:49.113115 tar[1326]: ./ipvlan Oct 2 19:54:49.153849 tar[1326]: ./bandwidth Oct 2 19:54:49.228906 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:54:49.607564 sshd_keygen[1328]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:54:49.627084 systemd[1]: Finished sshd-keygen.service. Oct 2 19:54:49.631389 systemd[1]: Starting issuegen.service... Oct 2 19:54:49.634995 systemd[1]: Started waagent.service. Oct 2 19:54:49.641661 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:54:49.641818 systemd[1]: Finished issuegen.service. Oct 2 19:54:49.645338 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:54:49.662635 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:54:49.666282 systemd[1]: Started getty@tty1.service. Oct 2 19:54:49.669792 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 19:54:49.672345 systemd[1]: Reached target getty.target. Oct 2 19:54:49.674499 systemd[1]: Reached target multi-user.target. Oct 2 19:54:49.678201 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:54:49.687625 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:54:49.687789 systemd[1]: Finished systemd-update-utmp-runlevel.service. 
Oct 2 19:54:49.690458 systemd[1]: Startup finished in 848ms (firmware) + 20.985s (loader) + 913ms (kernel) + 17.980s (initrd) + 19.356s (userspace) = 1min 84ms. Oct 2 19:54:50.044388 login[1432]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Oct 2 19:54:50.056001 login[1431]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 2 19:54:50.087815 systemd[1]: Created slice user-500.slice. Oct 2 19:54:50.089292 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:54:50.092584 systemd-logind[1319]: New session 1 of user core. Oct 2 19:54:50.109457 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:54:50.111146 systemd[1]: Starting user@500.service... Oct 2 19:54:50.114688 (systemd)[1435]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:54:50.197600 locksmithd[1410]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:54:50.253013 systemd[1435]: Queued start job for default target default.target. Oct 2 19:54:50.253741 systemd[1435]: Reached target paths.target. Oct 2 19:54:50.253779 systemd[1435]: Reached target sockets.target. Oct 2 19:54:50.253801 systemd[1435]: Reached target timers.target. Oct 2 19:54:50.253820 systemd[1435]: Reached target basic.target. Oct 2 19:54:50.253958 systemd[1]: Started user@500.service. Oct 2 19:54:50.255357 systemd[1]: Started session-1.scope. Oct 2 19:54:50.256052 systemd[1435]: Reached target default.target. Oct 2 19:54:50.256281 systemd[1435]: Startup finished in 135ms. Oct 2 19:54:51.046918 login[1432]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 2 19:54:51.054555 systemd[1]: Started session-2.scope. Oct 2 19:54:51.055272 systemd-logind[1319]: New session 2 of user core. Oct 2 19:54:54.965510 waagent[1426]: 2023-10-02T19:54:54.965411Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Oct 2 19:54:54.979337 waagent[1426]: 2023-10-02T19:54:54.967928Z INFO Daemon Daemon OS: flatcar 3510.3.0 Oct 2 19:54:54.979337 waagent[1426]: 2023-10-02T19:54:54.968922Z INFO Daemon Daemon Python: 3.9.16 Oct 2 19:54:54.979337 waagent[1426]: 2023-10-02T19:54:54.970306Z INFO Daemon Daemon Run daemon Oct 2 19:54:54.979337 waagent[1426]: 2023-10-02T19:54:54.971309Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.0' Oct 2 19:54:54.983210 waagent[1426]: 2023-10-02T19:54:54.983080Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
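
The "Startup finished" line above splits boot time into firmware, loader, kernel, initrd, and userspace phases. Summing the figures exactly as printed reproduces the total to within a couple of milliseconds; systemd sums the unrounded monotonic timestamps, hence the small difference from the reported 1min 84ms:

    phases = {                      # values exactly as printed in the log, in seconds
        "firmware": 0.848,
        "loader": 20.985,
        "kernel": 0.913,
        "initrd": 17.980,
        "userspace": 19.356,
    }
    total = sum(phases.values())
    minutes, seconds = divmod(total, 60)
    print(f"{int(minutes)}min {seconds * 1000:.0f}ms")   # ~1min 82ms vs. the reported 1min 84ms
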
Oct 2 19:54:54.990797 waagent[1426]: 2023-10-02T19:54:54.990695Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Oct 2 19:54:54.995475 waagent[1426]: 2023-10-02T19:54:54.995412Z INFO Daemon Daemon cloud-init is enabled: False Oct 2 19:54:54.997982 waagent[1426]: 2023-10-02T19:54:54.997922Z INFO Daemon Daemon Using waagent for provisioning Oct 2 19:54:55.001017 waagent[1426]: 2023-10-02T19:54:55.000958Z INFO Daemon Daemon Activate resource disk Oct 2 19:54:55.003341 waagent[1426]: 2023-10-02T19:54:55.003283Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Oct 2 19:54:55.015210 waagent[1426]: 2023-10-02T19:54:55.015127Z INFO Daemon Daemon Found device: None Oct 2 19:54:55.017904 waagent[1426]: 2023-10-02T19:54:55.017841Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Oct 2 19:54:55.021744 waagent[1426]: 2023-10-02T19:54:55.021688Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Oct 2 19:54:55.027417 waagent[1426]: 2023-10-02T19:54:55.027355Z INFO Daemon Daemon Clean protocol and wireserver endpoint Oct 2 19:54:55.030252 waagent[1426]: 2023-10-02T19:54:55.030194Z INFO Daemon Daemon Running default provisioning handler Oct 2 19:54:55.040248 waagent[1426]: 2023-10-02T19:54:55.040129Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Oct 2 19:54:55.046642 waagent[1426]: 2023-10-02T19:54:55.046541Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Oct 2 19:54:55.054945 waagent[1426]: 2023-10-02T19:54:55.047719Z INFO Daemon Daemon cloud-init is enabled: False Oct 2 19:54:55.054945 waagent[1426]: 2023-10-02T19:54:55.048574Z INFO Daemon Daemon Copying ovf-env.xml Oct 2 19:54:55.067916 waagent[1426]: 2023-10-02T19:54:55.067805Z INFO Daemon Daemon Successfully mounted dvd Oct 2 19:54:55.163936 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Oct 2 19:54:55.181400 waagent[1426]: 2023-10-02T19:54:55.181266Z INFO Daemon Daemon Detect protocol endpoint Oct 2 19:54:55.184075 waagent[1426]: 2023-10-02T19:54:55.184004Z INFO Daemon Daemon Clean protocol and wireserver endpoint Oct 2 19:54:55.187411 waagent[1426]: 2023-10-02T19:54:55.187316Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Oct 2 19:54:55.191179 waagent[1426]: 2023-10-02T19:54:55.191098Z INFO Daemon Daemon Test for route to 168.63.129.16 Oct 2 19:54:55.193937 waagent[1426]: 2023-10-02T19:54:55.193875Z INFO Daemon Daemon Route to 168.63.129.16 exists Oct 2 19:54:55.196551 waagent[1426]: 2023-10-02T19:54:55.196473Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Oct 2 19:54:55.287135 waagent[1426]: 2023-10-02T19:54:55.286997Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Oct 2 19:54:55.294504 waagent[1426]: 2023-10-02T19:54:55.288831Z INFO Daemon Daemon Wire protocol version:2012-11-30 Oct 2 19:54:55.294504 waagent[1426]: 2023-10-02T19:54:55.289591Z INFO Daemon Daemon Server preferred version:2015-04-05 Oct 2 19:54:55.711387 waagent[1426]: 2023-10-02T19:54:55.711243Z INFO Daemon Daemon Initializing goal state during protocol detection Oct 2 19:54:55.721464 waagent[1426]: 2023-10-02T19:54:55.721393Z INFO Daemon Daemon Forcing an update of the goal state.. Oct 2 19:54:55.726408 waagent[1426]: 2023-10-02T19:54:55.722697Z INFO Daemon Daemon Fetching goal state [incarnation 1] Oct 2 19:54:55.800745 waagent[1426]: 2023-10-02T19:54:55.800619Z INFO Daemon Daemon Found private key matching thumbprint FE7BDF9F945D1251241BB2AFA730C1085FAD98AC Oct 2 19:54:55.804766 waagent[1426]: 2023-10-02T19:54:55.804691Z INFO Daemon Daemon Certificate with thumbprint CB6EE5B00AEB1C67CB80D8FBF1376CCC4B0728EB has no matching private key. Oct 2 19:54:55.809290 waagent[1426]: 2023-10-02T19:54:55.809221Z INFO Daemon Daemon Fetch goal state completed Oct 2 19:54:55.860626 waagent[1426]: 2023-10-02T19:54:55.860521Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: dceca23e-b6ff-4a0d-b092-f88eb95b2e15 New eTag: 5119025971182490693] Oct 2 19:54:55.866096 waagent[1426]: 2023-10-02T19:54:55.862373Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Oct 2 19:54:55.871084 waagent[1426]: 2023-10-02T19:54:55.871028Z INFO Daemon Daemon Starting provisioning Oct 2 19:54:55.877963 waagent[1426]: 2023-10-02T19:54:55.872487Z INFO Daemon Daemon Handle ovf-env.xml. Oct 2 19:54:55.877963 waagent[1426]: 2023-10-02T19:54:55.873416Z INFO Daemon Daemon Set hostname [ci-3510.3.0-a-d5a4e3b63c] Oct 2 19:54:55.887589 waagent[1426]: 2023-10-02T19:54:55.887467Z INFO Daemon Daemon Publish hostname [ci-3510.3.0-a-d5a4e3b63c] Oct 2 19:54:55.894945 waagent[1426]: 2023-10-02T19:54:55.889114Z INFO Daemon Daemon Examine /proc/net/route for primary interface Oct 2 19:54:55.894945 waagent[1426]: 2023-10-02T19:54:55.889969Z INFO Daemon Daemon Primary interface is [eth0] Oct 2 19:54:55.903273 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Oct 2 19:54:55.903537 systemd[1]: Stopped systemd-networkd-wait-online.service. Oct 2 19:54:55.903612 systemd[1]: Stopping systemd-networkd-wait-online.service... Oct 2 19:54:55.903948 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:54:55.909565 systemd-networkd[1194]: eth0: DHCPv6 lease lost Oct 2 19:54:55.910870 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:54:55.911060 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:54:55.913270 systemd[1]: Starting systemd-networkd.service... 
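
The "Test for route to 168.63.129.16" step above consults /proc/net/route, where addresses are stored as little-endian hex (168.63.129.16 is 10813FA8, visible in the routing-table dumps later in this log). A small illustrative check along those lines, assuming the standard /proc/net/route column layout:

import socket, struct

def route_exists(target="168.63.129.16"):
    # /proc/net/route columns: Iface Destination Gateway Flags RefCnt Use Metric Mask ...
    want = struct.unpack("<I", socket.inet_aton(target))[0]
    with open("/proc/net/route") as f:
        next(f)                                # skip the header row
        for line in f:
            fields = line.split()
            if len(fields) < 8:
                continue
            dest = int(fields[1], 16)          # little-endian hex, e.g. 10813FA8
            mask = int(fields[7], 16)
            if (want & mask) == (dest & mask): # default route has dest=mask=0
                return True
    return False

print("Route to 168.63.129.16 exists:", route_exists())
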
Oct 2 19:54:55.944248 systemd-networkd[1480]: enP53948s1: Link UP Oct 2 19:54:55.944258 systemd-networkd[1480]: enP53948s1: Gained carrier Oct 2 19:54:55.945752 systemd-networkd[1480]: eth0: Link UP Oct 2 19:54:55.945760 systemd-networkd[1480]: eth0: Gained carrier Oct 2 19:54:55.946198 systemd-networkd[1480]: lo: Link UP Oct 2 19:54:55.946207 systemd-networkd[1480]: lo: Gained carrier Oct 2 19:54:55.946498 systemd-networkd[1480]: eth0: Gained IPv6LL Oct 2 19:54:55.947079 systemd-networkd[1480]: Enumeration completed Oct 2 19:54:55.947183 systemd[1]: Started systemd-networkd.service. Oct 2 19:54:55.949078 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:54:55.966198 waagent[1426]: 2023-10-02T19:54:55.951539Z INFO Daemon Daemon Create user account if not exists Oct 2 19:54:55.966198 waagent[1426]: 2023-10-02T19:54:55.953168Z INFO Daemon Daemon User core already exists, skip useradd Oct 2 19:54:55.966198 waagent[1426]: 2023-10-02T19:54:55.954014Z INFO Daemon Daemon Configure sudoer Oct 2 19:54:55.966198 waagent[1426]: 2023-10-02T19:54:55.956924Z INFO Daemon Daemon Configure sshd Oct 2 19:54:55.966198 waagent[1426]: 2023-10-02T19:54:55.959603Z INFO Daemon Daemon Deploy ssh public key. Oct 2 19:54:55.962489 systemd-networkd[1480]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:54:55.996780 waagent[1426]: 2023-10-02T19:54:55.996665Z INFO Daemon Daemon Decode custom data Oct 2 19:54:55.999395 waagent[1426]: 2023-10-02T19:54:55.999304Z INFO Daemon Daemon Save custom data Oct 2 19:54:56.032634 systemd-networkd[1480]: eth0: DHCPv4 address 10.200.8.20/24, gateway 10.200.8.1 acquired from 168.63.129.16 Oct 2 19:54:56.035726 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:55:26.247915 waagent[1426]: 2023-10-02T19:55:26.247816Z INFO Daemon Daemon Provisioning complete Oct 2 19:55:26.264973 waagent[1426]: 2023-10-02T19:55:26.264889Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Oct 2 19:55:26.271549 waagent[1426]: 2023-10-02T19:55:26.266163Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Oct 2 19:55:26.271549 waagent[1426]: 2023-10-02T19:55:26.267804Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Oct 2 19:55:26.530550 waagent[1489]: 2023-10-02T19:55:26.530363Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Oct 2 19:55:26.531251 waagent[1489]: 2023-10-02T19:55:26.531186Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 2 19:55:26.531395 waagent[1489]: 2023-10-02T19:55:26.531340Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 2 19:55:26.542211 waagent[1489]: 2023-10-02T19:55:26.542139Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Oct 2 19:55:26.542368 waagent[1489]: 2023-10-02T19:55:26.542315Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Oct 2 19:55:26.602155 waagent[1489]: 2023-10-02T19:55:26.602024Z INFO ExtHandler ExtHandler Found private key matching thumbprint FE7BDF9F945D1251241BB2AFA730C1085FAD98AC Oct 2 19:55:26.602372 waagent[1489]: 2023-10-02T19:55:26.602311Z INFO ExtHandler ExtHandler Certificate with thumbprint CB6EE5B00AEB1C67CB80D8FBF1376CCC4B0728EB has no matching private key. 
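
The systemd-networkd restart and the DHCPv4 lease recorded above (10.200.8.20/24 via 10.200.8.1, offered by 168.63.129.16) can be inspected after the fact with networkd's own CLI; a throwaway sketch, assuming networkctl is available on the host:

import subprocess

# Show link state, then the current address/gateway/DHCP server for eth0.
for cmd in (["networkctl", "list"], ["networkctl", "status", "eth0"]):
    done = subprocess.run(cmd, capture_output=True, text=True)
    print("$", " ".join(cmd))
    print(done.stdout or done.stderr)
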
Oct 2 19:55:26.602633 waagent[1489]: 2023-10-02T19:55:26.602582Z INFO ExtHandler ExtHandler Fetch goal state completed Oct 2 19:55:26.621885 waagent[1489]: 2023-10-02T19:55:26.621823Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 53ce6fa3-bc4b-4ab1-bbb3-518e01bf9d2c New eTag: 5119025971182490693] Oct 2 19:55:26.622442 waagent[1489]: 2023-10-02T19:55:26.622383Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Oct 2 19:55:26.707410 waagent[1489]: 2023-10-02T19:55:26.707240Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.0; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Oct 2 19:55:26.718702 waagent[1489]: 2023-10-02T19:55:26.718620Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1489 Oct 2 19:55:26.722063 waagent[1489]: 2023-10-02T19:55:26.722001Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.0', '', 'Flatcar Container Linux by Kinvolk'] Oct 2 19:55:26.723297 waagent[1489]: 2023-10-02T19:55:26.723242Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Oct 2 19:55:26.787149 waagent[1489]: 2023-10-02T19:55:26.787003Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Oct 2 19:55:26.787709 waagent[1489]: 2023-10-02T19:55:26.787498Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Oct 2 19:55:26.795836 waagent[1489]: 2023-10-02T19:55:26.795779Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Oct 2 19:55:26.796312 waagent[1489]: 2023-10-02T19:55:26.796254Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Oct 2 19:55:26.797381 waagent[1489]: 2023-10-02T19:55:26.797317Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Oct 2 19:55:26.798677 waagent[1489]: 2023-10-02T19:55:26.798618Z INFO ExtHandler ExtHandler Starting env monitor service. Oct 2 19:55:26.799129 waagent[1489]: 2023-10-02T19:55:26.799071Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 2 19:55:26.799468 waagent[1489]: 2023-10-02T19:55:26.799415Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 2 19:55:26.799989 waagent[1489]: 2023-10-02T19:55:26.799937Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Oct 2 19:55:26.800270 waagent[1489]: 2023-10-02T19:55:26.800216Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Oct 2 19:55:26.800270 waagent[1489]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Oct 2 19:55:26.800270 waagent[1489]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Oct 2 19:55:26.800270 waagent[1489]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Oct 2 19:55:26.800270 waagent[1489]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Oct 2 19:55:26.800270 waagent[1489]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 2 19:55:26.800270 waagent[1489]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 2 19:55:26.802957 waagent[1489]: 2023-10-02T19:55:26.802862Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Oct 2 19:55:26.803176 waagent[1489]: 2023-10-02T19:55:26.803117Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 2 19:55:26.803365 waagent[1489]: 2023-10-02T19:55:26.803317Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 2 19:55:26.803824 waagent[1489]: 2023-10-02T19:55:26.803772Z INFO EnvHandler ExtHandler Configure routes Oct 2 19:55:26.803972 waagent[1489]: 2023-10-02T19:55:26.803927Z INFO EnvHandler ExtHandler Gateway:None Oct 2 19:55:26.804108 waagent[1489]: 2023-10-02T19:55:26.804065Z INFO EnvHandler ExtHandler Routes:None Oct 2 19:55:26.805211 waagent[1489]: 2023-10-02T19:55:26.805146Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Oct 2 19:55:26.805353 waagent[1489]: 2023-10-02T19:55:26.805303Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Oct 2 19:55:26.805912 waagent[1489]: 2023-10-02T19:55:26.805852Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Oct 2 19:55:26.805994 waagent[1489]: 2023-10-02T19:55:26.805944Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Oct 2 19:55:26.807050 waagent[1489]: 2023-10-02T19:55:26.806998Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Oct 2 19:55:26.816062 waagent[1489]: 2023-10-02T19:55:26.816009Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Oct 2 19:55:26.819481 waagent[1489]: 2023-10-02T19:55:26.818896Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Oct 2 19:55:26.820709 waagent[1489]: 2023-10-02T19:55:26.820648Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Oct 2 19:55:26.830121 waagent[1489]: 2023-10-02T19:55:26.830054Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1480' Oct 2 19:55:26.876619 waagent[1489]: 2023-10-02T19:55:26.876559Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
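
The EnvHandler error above ("invalid literal for int() with base 10: 'MainPID=1480'") is a whole KEY=VALUE property string being handed to int(); 1480 is the systemd-networkd instance seen earlier in this log. The exact command the agent ran is not shown here, but systemctl's "show -p MainPID <unit>" output has exactly that shape, and parsing it robustly is a one-liner. Illustrative sketch only:

import subprocess

def main_pid(unit="systemd-networkd.service"):
    # "systemctl show -p MainPID <unit>" prints e.g. "MainPID=1480"
    out = subprocess.run(["systemctl", "show", "-p", "MainPID", unit],
                         capture_output=True, text=True, check=True).stdout.strip()
    key, _, value = out.partition("=")
    return int(value) if key == "MainPID" and value.isdigit() else None

print(main_pid())
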
Oct 2 19:55:26.963447 waagent[1489]: 2023-10-02T19:55:26.963324Z INFO MonitorHandler ExtHandler Network interfaces: Oct 2 19:55:26.963447 waagent[1489]: Executing ['ip', '-a', '-o', 'link']: Oct 2 19:55:26.963447 waagent[1489]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Oct 2 19:55:26.963447 waagent[1489]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b9:1f:e7 brd ff:ff:ff:ff:ff:ff Oct 2 19:55:26.963447 waagent[1489]: 3: enP53948s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b9:1f:e7 brd ff:ff:ff:ff:ff:ff\ altname enP53948p0s2 Oct 2 19:55:26.963447 waagent[1489]: Executing ['ip', '-4', '-a', '-o', 'address']: Oct 2 19:55:26.963447 waagent[1489]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Oct 2 19:55:26.963447 waagent[1489]: 2: eth0 inet 10.200.8.20/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Oct 2 19:55:26.963447 waagent[1489]: Executing ['ip', '-6', '-a', '-o', 'address']: Oct 2 19:55:26.963447 waagent[1489]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Oct 2 19:55:26.963447 waagent[1489]: 2: eth0 inet6 fe80::20d:3aff:feb9:1fe7/64 scope link \ valid_lft forever preferred_lft forever Oct 2 19:55:27.111606 waagent[1489]: 2023-10-02T19:55:27.111457Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Oct 2 19:55:27.114679 waagent[1489]: 2023-10-02T19:55:27.114563Z INFO EnvHandler ExtHandler Firewall rules: Oct 2 19:55:27.114679 waagent[1489]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Oct 2 19:55:27.114679 waagent[1489]: pkts bytes target prot opt in out source destination Oct 2 19:55:27.114679 waagent[1489]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Oct 2 19:55:27.114679 waagent[1489]: pkts bytes target prot opt in out source destination Oct 2 19:55:27.114679 waagent[1489]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Oct 2 19:55:27.114679 waagent[1489]: pkts bytes target prot opt in out source destination Oct 2 19:55:27.114679 waagent[1489]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Oct 2 19:55:27.114679 waagent[1489]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Oct 2 19:55:27.116021 waagent[1489]: 2023-10-02T19:55:27.115968Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Oct 2 19:55:27.331334 waagent[1489]: 2023-10-02T19:55:27.331257Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.10.0.3 -- exiting Oct 2 19:55:28.272852 waagent[1426]: 2023-10-02T19:55:28.272678Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Oct 2 19:55:28.279145 waagent[1426]: 2023-10-02T19:55:28.279079Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.10.0.3 to be the latest agent Oct 2 19:55:29.150559 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Oct 2 19:55:29.262469 waagent[1528]: 2023-10-02T19:55:29.262363Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.10.0.3) Oct 2 19:55:29.263166 waagent[1528]: 2023-10-02T19:55:29.263088Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.0 Oct 2 19:55:29.263307 waagent[1528]: 2023-10-02T19:55:29.263255Z INFO ExtHandler ExtHandler Python: 3.9.16 Oct 2 19:55:29.272541 waagent[1528]: 2023-10-02T19:55:29.272435Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.0; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Oct 2 19:55:29.272957 waagent[1528]: 2023-10-02T19:55:29.272901Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 2 19:55:29.273108 waagent[1528]: 2023-10-02T19:55:29.273062Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 2 19:55:29.284201 waagent[1528]: 2023-10-02T19:55:29.284128Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Oct 2 19:55:29.291996 waagent[1528]: 2023-10-02T19:55:29.291934Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Oct 2 19:55:29.292887 waagent[1528]: 2023-10-02T19:55:29.292828Z INFO ExtHandler Oct 2 19:55:29.293035 waagent[1528]: 2023-10-02T19:55:29.292985Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 31f401e4-1910-41b4-a9fc-4647dd03fd46 eTag: 5119025971182490693 source: Fabric] Oct 2 19:55:29.293721 waagent[1528]: 2023-10-02T19:55:29.293664Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Oct 2 19:55:29.294806 waagent[1528]: 2023-10-02T19:55:29.294748Z INFO ExtHandler Oct 2 19:55:29.294927 waagent[1528]: 2023-10-02T19:55:29.294884Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Oct 2 19:55:29.301395 waagent[1528]: 2023-10-02T19:55:29.301343Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Oct 2 19:55:29.301831 waagent[1528]: 2023-10-02T19:55:29.301784Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Oct 2 19:55:29.319814 waagent[1528]: 2023-10-02T19:55:29.319756Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. 
Oct 2 19:55:29.381265 waagent[1528]: 2023-10-02T19:55:29.381133Z INFO ExtHandler Downloaded certificate {'thumbprint': 'FE7BDF9F945D1251241BB2AFA730C1085FAD98AC', 'hasPrivateKey': True} Oct 2 19:55:29.382228 waagent[1528]: 2023-10-02T19:55:29.382159Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CB6EE5B00AEB1C67CB80D8FBF1376CCC4B0728EB', 'hasPrivateKey': False} Oct 2 19:55:29.383218 waagent[1528]: 2023-10-02T19:55:29.383155Z INFO ExtHandler Fetch goal state completed Oct 2 19:55:29.402745 waagent[1528]: 2023-10-02T19:55:29.402633Z INFO ExtHandler ExtHandler WALinuxAgent-2.10.0.3 running as process 1528 Oct 2 19:55:29.405973 waagent[1528]: 2023-10-02T19:55:29.405911Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.0', '', 'Flatcar Container Linux by Kinvolk'] Oct 2 19:55:29.407445 waagent[1528]: 2023-10-02T19:55:29.407389Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Oct 2 19:55:29.412222 waagent[1528]: 2023-10-02T19:55:29.412169Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Oct 2 19:55:29.412596 waagent[1528]: 2023-10-02T19:55:29.412540Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Oct 2 19:55:29.420397 waagent[1528]: 2023-10-02T19:55:29.420340Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Oct 2 19:55:29.420886 waagent[1528]: 2023-10-02T19:55:29.420826Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Oct 2 19:55:29.441575 waagent[1528]: 2023-10-02T19:55:29.441431Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now. Oct 2 19:55:29.444579 waagent[1528]: 2023-10-02T19:55:29.444457Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver Oct 2 19:55:29.450013 waagent[1528]: 2023-10-02T19:55:29.449954Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Oct 2 19:55:29.451412 waagent[1528]: 2023-10-02T19:55:29.451355Z INFO ExtHandler ExtHandler Starting env monitor service. Oct 2 19:55:29.452181 waagent[1528]: 2023-10-02T19:55:29.452125Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Oct 2 19:55:29.452453 waagent[1528]: 2023-10-02T19:55:29.452397Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 2 19:55:29.452584 waagent[1528]: 2023-10-02T19:55:29.452505Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 2 19:55:29.452832 waagent[1528]: 2023-10-02T19:55:29.452783Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 2 19:55:29.453647 waagent[1528]: 2023-10-02T19:55:29.453590Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Oct 2 19:55:29.453766 waagent[1528]: 2023-10-02T19:55:29.453693Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 2 19:55:29.453862 waagent[1528]: 2023-10-02T19:55:29.453795Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Oct 2 19:55:29.454502 waagent[1528]: 2023-10-02T19:55:29.454445Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Oct 2 19:55:29.455192 waagent[1528]: 2023-10-02T19:55:29.455134Z INFO EnvHandler ExtHandler Configure routes Oct 2 19:55:29.455829 waagent[1528]: 2023-10-02T19:55:29.455769Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Oct 2 19:55:29.455976 waagent[1528]: 2023-10-02T19:55:29.455922Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Oct 2 19:55:29.455976 waagent[1528]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Oct 2 19:55:29.455976 waagent[1528]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Oct 2 19:55:29.455976 waagent[1528]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Oct 2 19:55:29.455976 waagent[1528]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Oct 2 19:55:29.455976 waagent[1528]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 2 19:55:29.455976 waagent[1528]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 2 19:55:29.456248 waagent[1528]: 2023-10-02T19:55:29.456165Z INFO EnvHandler ExtHandler Gateway:None Oct 2 19:55:29.456370 waagent[1528]: 2023-10-02T19:55:29.456305Z INFO EnvHandler ExtHandler Routes:None Oct 2 19:55:29.456537 waagent[1528]: 2023-10-02T19:55:29.456477Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Oct 2 19:55:29.462785 waagent[1528]: 2023-10-02T19:55:29.462727Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Oct 2 19:55:29.488346 waagent[1528]: 2023-10-02T19:55:29.488282Z INFO MonitorHandler ExtHandler Network interfaces: Oct 2 19:55:29.488346 waagent[1528]: Executing ['ip', '-a', '-o', 'link']: Oct 2 19:55:29.488346 waagent[1528]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Oct 2 19:55:29.488346 waagent[1528]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b9:1f:e7 brd ff:ff:ff:ff:ff:ff Oct 2 19:55:29.488346 waagent[1528]: 3: enP53948s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b9:1f:e7 brd ff:ff:ff:ff:ff:ff\ altname enP53948p0s2 Oct 2 19:55:29.488346 waagent[1528]: Executing ['ip', '-4', '-a', '-o', 'address']: Oct 2 19:55:29.488346 waagent[1528]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Oct 2 19:55:29.488346 waagent[1528]: 2: eth0 inet 10.200.8.20/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Oct 2 19:55:29.488346 waagent[1528]: Executing ['ip', '-6', '-a', '-o', 'address']: Oct 2 19:55:29.488346 waagent[1528]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Oct 2 19:55:29.488346 waagent[1528]: 2: eth0 inet6 fe80::20d:3aff:feb9:1fe7/64 scope link \ valid_lft forever preferred_lft forever Oct 2 19:55:29.490134 waagent[1528]: 2023-10-02T19:55:29.490080Z INFO ExtHandler ExtHandler Downloading agent manifest Oct 2 19:55:29.540637 waagent[1528]: 2023-10-02T19:55:29.540562Z INFO ExtHandler ExtHandler Oct 2 19:55:29.541646 waagent[1528]: 2023-10-02T19:55:29.541578Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 1c2543dd-bc49-4ad6-91a8-6df4cc774290 correlation 44b56c85-85a5-4d20-be3e-7ddf0eefa575 
created: 2023-10-02T19:53:37.534592Z] Oct 2 19:55:29.544542 waagent[1528]: 2023-10-02T19:55:29.544446Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Oct 2 19:55:29.549844 waagent[1528]: 2023-10-02T19:55:29.549790Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 9 ms] Oct 2 19:55:29.560932 waagent[1528]: 2023-10-02T19:55:29.560854Z INFO EnvHandler ExtHandler Current Firewall rules: Oct 2 19:55:29.560932 waagent[1528]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Oct 2 19:55:29.560932 waagent[1528]: pkts bytes target prot opt in out source destination Oct 2 19:55:29.560932 waagent[1528]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Oct 2 19:55:29.560932 waagent[1528]: pkts bytes target prot opt in out source destination Oct 2 19:55:29.560932 waagent[1528]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Oct 2 19:55:29.560932 waagent[1528]: pkts bytes target prot opt in out source destination Oct 2 19:55:29.560932 waagent[1528]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Oct 2 19:55:29.560932 waagent[1528]: 162 18029 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Oct 2 19:55:29.560932 waagent[1528]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Oct 2 19:55:29.578021 waagent[1528]: 2023-10-02T19:55:29.577959Z INFO ExtHandler ExtHandler Looking for existing remote access users. Oct 2 19:55:29.587932 waagent[1528]: 2023-10-02T19:55:29.587860Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.10.0.3 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: EE3B61CC-E95A-481A-8D45-FEB77E56CCB5;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Oct 2 19:55:34.478440 update_engine[1320]: I1002 19:55:34.478386 1320 update_attempter.cc:505] Updating boot flags... Oct 2 19:56:14.655437 systemd[1]: Created slice system-sshd.slice. Oct 2 19:56:14.656972 systemd[1]: Started sshd@0-10.200.8.20:22-10.200.12.6:44052.service. Oct 2 19:56:15.719653 sshd[1636]: Accepted publickey for core from 10.200.12.6 port 44052 ssh2: RSA SHA256:cL8ODMZgKrzn60NSELkVQm/2+yvjqGgc1prcSOxmfAg Oct 2 19:56:15.721234 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:15.726209 systemd-logind[1319]: New session 3 of user core. Oct 2 19:56:15.727217 systemd[1]: Started session-3.scope. Oct 2 19:56:16.420441 systemd[1]: Started sshd@1-10.200.8.20:22-10.200.12.6:44054.service. Oct 2 19:56:17.050767 sshd[1641]: Accepted publickey for core from 10.200.12.6 port 44054 ssh2: RSA SHA256:cL8ODMZgKrzn60NSELkVQm/2+yvjqGgc1prcSOxmfAg Oct 2 19:56:17.052331 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:17.057410 systemd[1]: Started session-4.scope. Oct 2 19:56:17.057930 systemd-logind[1319]: New session 4 of user core. Oct 2 19:56:17.547022 sshd[1641]: pam_unix(sshd:session): session closed for user core Oct 2 19:56:17.550074 systemd[1]: sshd@1-10.200.8.20:22-10.200.12.6:44054.service: Deactivated successfully. Oct 2 19:56:17.551058 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:56:17.551863 systemd-logind[1319]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:56:17.552753 systemd-logind[1319]: Removed session 4. Oct 2 19:56:17.651116 systemd[1]: Started sshd@2-10.200.8.20:22-10.200.12.6:44266.service. 
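
The "Current Firewall rules" listing above (allow TCP/53 to 168.63.129.16, allow root-owned TCP to it, drop other new/invalid TCP) can be approximated with plain iptables. These are roughly equivalent rules, not necessarily the agent's exact invocation or ordering, printed as a dry run rather than applied:

import shlex

WIRESERVER = "168.63.129.16"
RULES = [
    ["iptables", "-w", "-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "--dport", "53", "-j", "ACCEPT"],
    ["iptables", "-w", "-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    ["iptables", "-w", "-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]
for rule in RULES:
    print(shlex.join(rule))   # apply with subprocess.run(rule, check=True) as root
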
Oct 2 19:56:18.288987 sshd[1647]: Accepted publickey for core from 10.200.12.6 port 44266 ssh2: RSA SHA256:cL8ODMZgKrzn60NSELkVQm/2+yvjqGgc1prcSOxmfAg Oct 2 19:56:18.290581 sshd[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:18.296082 systemd[1]: Started session-5.scope. Oct 2 19:56:18.296945 systemd-logind[1319]: New session 5 of user core. Oct 2 19:56:18.724171 sshd[1647]: pam_unix(sshd:session): session closed for user core Oct 2 19:56:18.727370 systemd[1]: sshd@2-10.200.8.20:22-10.200.12.6:44266.service: Deactivated successfully. Oct 2 19:56:18.728135 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:56:18.728726 systemd-logind[1319]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:56:18.729431 systemd-logind[1319]: Removed session 5. Oct 2 19:56:18.829759 systemd[1]: Started sshd@3-10.200.8.20:22-10.200.12.6:44268.service. Oct 2 19:56:19.453479 sshd[1653]: Accepted publickey for core from 10.200.12.6 port 44268 ssh2: RSA SHA256:cL8ODMZgKrzn60NSELkVQm/2+yvjqGgc1prcSOxmfAg Oct 2 19:56:19.455081 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:19.460330 systemd[1]: Started session-6.scope. Oct 2 19:56:19.461022 systemd-logind[1319]: New session 6 of user core. Oct 2 19:56:19.896438 sshd[1653]: pam_unix(sshd:session): session closed for user core Oct 2 19:56:19.899792 systemd[1]: sshd@3-10.200.8.20:22-10.200.12.6:44268.service: Deactivated successfully. Oct 2 19:56:19.900711 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:56:19.901432 systemd-logind[1319]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:56:19.902319 systemd-logind[1319]: Removed session 6. Oct 2 19:56:20.000871 systemd[1]: Started sshd@4-10.200.8.20:22-10.200.12.6:44282.service. Oct 2 19:56:20.624656 sshd[1659]: Accepted publickey for core from 10.200.12.6 port 44282 ssh2: RSA SHA256:cL8ODMZgKrzn60NSELkVQm/2+yvjqGgc1prcSOxmfAg Oct 2 19:56:20.625947 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:20.630512 systemd[1]: Started session-7.scope. Oct 2 19:56:20.631216 systemd-logind[1319]: New session 7 of user core. Oct 2 19:56:21.190351 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:56:21.190697 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:56:21.210747 dbus-daemon[1306]: \xd0\u001d\xdbu\xf1U: received setenforce notice (enforcing=2105795936) Oct 2 19:56:21.212738 sudo[1662]: pam_unix(sudo:session): session closed for user root Oct 2 19:56:21.324406 sshd[1659]: pam_unix(sshd:session): session closed for user core Oct 2 19:56:21.327664 systemd[1]: sshd@4-10.200.8.20:22-10.200.12.6:44282.service: Deactivated successfully. Oct 2 19:56:21.328509 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:56:21.329134 systemd-logind[1319]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:56:21.329932 systemd-logind[1319]: Removed session 7. Oct 2 19:56:21.427852 systemd[1]: Started sshd@5-10.200.8.20:22-10.200.12.6:44286.service. Oct 2 19:56:22.063145 sshd[1666]: Accepted publickey for core from 10.200.12.6 port 44286 ssh2: RSA SHA256:cL8ODMZgKrzn60NSELkVQm/2+yvjqGgc1prcSOxmfAg Oct 2 19:56:22.065857 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:22.070354 systemd[1]: Started session-8.scope. Oct 2 19:56:22.070824 systemd-logind[1319]: New session 8 of user core. 
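
Each "Accepted publickey ... SHA256:cL8O..." entry above identifies the client key by its SHA256 fingerprint; the same fingerprint can be recomputed from the deployed key to confirm which authorized key is in use. Illustrative only, and the authorized_keys path is an assumption about where the deployed key landed:

import subprocess

# Print "bits SHA256:<fingerprint> comment (type)" for each key in the file.
out = subprocess.run(
    ["ssh-keygen", "-l", "-f", "/home/core/.ssh/authorized_keys"],  # assumed path
    capture_output=True, text=True)
print(out.stdout or out.stderr)
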
Oct 2 19:56:22.405173 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:56:22.405434 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:56:22.408275 sudo[1670]: pam_unix(sudo:session): session closed for user root Oct 2 19:56:22.412700 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:56:22.412957 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:56:22.421456 systemd[1]: Stopping audit-rules.service... Oct 2 19:56:22.421000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:56:22.425299 auditctl[1673]: No rules Oct 2 19:56:22.432387 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 2 19:56:22.432472 kernel: audit: type=1305 audit(1696276582.421:175): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:56:22.425738 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:56:22.425903 systemd[1]: Stopped audit-rules.service. Oct 2 19:56:22.433315 systemd[1]: Starting audit-rules.service... Oct 2 19:56:22.421000 audit[1673]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff164f1b40 a2=420 a3=0 items=0 ppid=1 pid=1673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:22.452288 kernel: audit: type=1300 audit(1696276582.421:175): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff164f1b40 a2=420 a3=0 items=0 ppid=1 pid=1673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:22.452360 kernel: audit: type=1327 audit(1696276582.421:175): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:56:22.421000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:56:22.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:22.454651 augenrules[1690]: No rules Oct 2 19:56:22.460824 systemd[1]: Finished audit-rules.service. Oct 2 19:56:22.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:22.462937 sudo[1669]: pam_unix(sudo:session): session closed for user root Oct 2 19:56:22.475897 kernel: audit: type=1131 audit(1696276582.424:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:22.475992 kernel: audit: type=1130 audit(1696276582.459:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:56:22.476027 kernel: audit: type=1106 audit(1696276582.461:178): pid=1669 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:22.461000 audit[1669]: USER_END pid=1669 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:22.461000 audit[1669]: CRED_DISP pid=1669 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:22.498936 kernel: audit: type=1104 audit(1696276582.461:179): pid=1669 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:22.562056 sshd[1666]: pam_unix(sshd:session): session closed for user core Oct 2 19:56:22.561000 audit[1666]: USER_END pid=1666 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:56:22.565417 systemd-logind[1319]: Session 8 logged out. Waiting for processes to exit. Oct 2 19:56:22.566246 systemd[1]: sshd@5-10.200.8.20:22-10.200.12.6:44286.service: Deactivated successfully. Oct 2 19:56:22.567085 systemd[1]: session-8.scope: Deactivated successfully. Oct 2 19:56:22.568226 systemd-logind[1319]: Removed session 8. Oct 2 19:56:22.561000 audit[1666]: CRED_DISP pid=1666 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:56:22.596912 kernel: audit: type=1106 audit(1696276582.561:180): pid=1666 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:56:22.597000 kernel: audit: type=1104 audit(1696276582.561:181): pid=1666 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:56:22.597023 kernel: audit: type=1131 audit(1696276582.562:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.20:22-10.200.12.6:44286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:22.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.20:22-10.200.12.6:44286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:22.666387 systemd[1]: Started sshd@6-10.200.8.20:22-10.200.12.6:44298.service. 
Oct 2 19:56:22.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.20:22-10.200.12.6:44298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:23.293000 audit[1696]: USER_ACCT pid=1696 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:56:23.295668 sshd[1696]: Accepted publickey for core from 10.200.12.6 port 44298 ssh2: RSA SHA256:cL8ODMZgKrzn60NSELkVQm/2+yvjqGgc1prcSOxmfAg Oct 2 19:56:23.295000 audit[1696]: CRED_ACQ pid=1696 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:56:23.295000 audit[1696]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9d9707b0 a2=3 a3=0 items=0 ppid=1 pid=1696 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:23.295000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:56:23.297278 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:23.302417 systemd[1]: Started session-9.scope. Oct 2 19:56:23.302870 systemd-logind[1319]: New session 9 of user core. Oct 2 19:56:23.305000 audit[1696]: USER_START pid=1696 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:56:23.307000 audit[1698]: CRED_ACQ pid=1698 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:56:23.636000 audit[1699]: USER_ACCT pid=1699 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:23.637000 audit[1699]: CRED_REFR pid=1699 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:23.637475 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:56:23.638260 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:56:23.639000 audit[1699]: USER_START pid=1699 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:24.317017 systemd[1]: Reloading. 
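
The audit records above carry PROCTITLE as hex-encoded, NUL-separated argv; decoding the two values seen in this log recovers the actual command lines ("/sbin/auditctl -D" and "sshd: core [priv]"). A tiny helper, assuming the standard auditd encoding:

def decode_proctitle(hexstr):
    # auditd logs the process title as hex bytes with NULs between argv entries.
    return bytes.fromhex(hexstr).decode("utf-8", "replace").split("\x00")

print(decode_proctitle("2F7362696E2F617564697463746C002D44"))   # ['/sbin/auditctl', '-D']
print(decode_proctitle("737368643A20636F7265205B707269765D"))   # ['sshd: core [priv]']
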
Oct 2 19:56:24.412973 /usr/lib/systemd/system-generators/torcx-generator[1737]: time="2023-10-02T19:56:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:56:24.413374 /usr/lib/systemd/system-generators/torcx-generator[1737]: time="2023-10-02T19:56:24Z" level=info msg="torcx already run" Oct 2 19:56:24.472493 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:56:24.472512 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:56:24.488442 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit: BPF prog-id=38 op=LOAD Oct 2 19:56:24.556000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit: BPF prog-id=39 op=LOAD Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.556000 audit: BPF prog-id=40 
op=LOAD Oct 2 19:56:24.556000 audit: BPF prog-id=33 op=UNLOAD Oct 2 19:56:24.556000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:56:24.558000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.558000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.558000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.558000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.558000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.558000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.558000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.558000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.558000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.559000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.559000 audit: BPF prog-id=41 op=LOAD Oct 2 19:56:24.559000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:56:24.560000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit: BPF prog-id=42 op=LOAD Oct 2 19:56:24.560000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit: BPF prog-id=43 op=LOAD Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.560000 audit: BPF prog-id=44 op=LOAD Oct 2 19:56:24.560000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:56:24.560000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit: BPF prog-id=45 op=LOAD Oct 2 19:56:24.562000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit: BPF prog-id=46 op=LOAD Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit[1]: 
AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.562000 audit: BPF prog-id=47 op=LOAD Oct 2 19:56:24.562000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:56:24.562000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:56:24.563000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.563000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.563000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.563000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.563000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.563000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.563000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.564000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.564000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.564000 audit: BPF prog-id=48 op=LOAD Oct 2 19:56:24.564000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.564000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.564000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.564000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.564000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.564000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.564000 audit[1]: AVC avc: denied { perfmon 
} for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.564000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.564000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.564000 audit: BPF prog-id=49 op=LOAD Oct 2 19:56:24.564000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:56:24.564000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:56:24.564000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.564000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.564000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.564000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.564000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.564000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.564000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.564000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.564000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.564000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.564000 audit: BPF prog-id=50 op=LOAD Oct 2 19:56:24.564000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:56:24.565000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.565000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.565000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.565000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.566000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.566000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.566000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.566000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.566000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.566000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.566000 audit: BPF prog-id=51 op=LOAD Oct 2 19:56:24.566000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:56:24.569000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.569000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.569000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.569000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.569000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.569000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.569000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.569000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.569000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.569000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.569000 audit: BPF prog-id=52 op=LOAD Oct 2 19:56:24.569000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:56:24.578364 systemd[1]: Started kubelet.service. Oct 2 19:56:24.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:24.593928 systemd[1]: Starting coreos-metadata.service... Oct 2 19:56:24.659449 kubelet[1791]: E1002 19:56:24.659388 1791 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Oct 2 19:56:24.660659 coreos-metadata[1798]: Oct 02 19:56:24.660 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Oct 2 19:56:24.661658 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 19:56:24.661825 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 19:56:24.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 19:56:24.663126 coreos-metadata[1798]: Oct 02 19:56:24.663 INFO Fetch successful Oct 2 19:56:24.663230 coreos-metadata[1798]: Oct 02 19:56:24.663 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Oct 2 19:56:24.664408 coreos-metadata[1798]: Oct 02 19:56:24.664 INFO Fetch successful Oct 2 19:56:24.664817 coreos-metadata[1798]: Oct 02 19:56:24.664 INFO Fetching http://168.63.129.16/machine/472dfd82-aa50-4660-97a6-fdac7489e5fa/1d96590c%2D0f70%2D404e%2Db30e%2D830c12011656.%5Fci%2D3510.3.0%2Da%2Dd5a4e3b63c?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Oct 2 19:56:24.666378 coreos-metadata[1798]: Oct 02 19:56:24.666 INFO Fetch successful Oct 2 19:56:24.699298 coreos-metadata[1798]: Oct 02 19:56:24.699 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Oct 2 19:56:24.708509 coreos-metadata[1798]: Oct 02 19:56:24.708 INFO Fetch successful Oct 2 19:56:24.717184 systemd[1]: Finished coreos-metadata.service. Oct 2 19:56:24.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:27.136841 systemd[1]: Stopped kubelet.service. Oct 2 19:56:27.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:27.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:27.150597 systemd[1]: Reloading. 
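
The entries above show the first kubelet start exiting because /var/lib/kubelet/config.yaml does not exist yet, while coreos-metadata completes its fetches against the Azure wireserver (168.63.129.16) and the instance metadata service (169.254.169.254). As a minimal sketch only (not coreos-metadata's actual implementation, which is a separate agent), the snippet below reproduces the final IMDS query recorded in the log; the URL is copied verbatim from the log line, it only returns data when run from inside an Azure VM, and IMDS requires the "Metadata: true" request header.

```python
#!/usr/bin/env python3
# Sketch: repeat the IMDS query that coreos-metadata logged above.
# Works only from inside an Azure VM; IMDS rejects requests without
# the "Metadata: true" header.
import urllib.request

IMDS_URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
            "?api-version=2017-08-01&format=text")

req = urllib.request.Request(IMDS_URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    # Prints the plain-text VM size (for example "Standard_DS2_v2").
    print(resp.read().decode())
```
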
Oct 2 19:56:27.201106 /usr/lib/systemd/system-generators/torcx-generator[1854]: time="2023-10-02T19:56:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:56:27.201594 /usr/lib/systemd/system-generators/torcx-generator[1854]: time="2023-10-02T19:56:27Z" level=info msg="torcx already run" Oct 2 19:56:27.304119 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:56:27.304138 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:56:27.320267 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:56:27.387000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.387000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.387000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.387000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.387000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.387000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.387000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.387000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.387000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.388000 audit: BPF prog-id=53 op=LOAD Oct 2 19:56:27.388000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:56:27.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.388000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.388000 audit: BPF prog-id=54 op=LOAD Oct 2 19:56:27.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.388000 audit: BPF prog-id=55 
op=LOAD Oct 2 19:56:27.388000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:56:27.388000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:56:27.390000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.390000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.390000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.390000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.390000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.390000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.390000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.390000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.390000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.390000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.390000 audit: BPF prog-id=56 op=LOAD Oct 2 19:56:27.390000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:56:27.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit: BPF prog-id=57 op=LOAD Oct 2 19:56:27.392000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit: BPF prog-id=58 op=LOAD Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.392000 audit: BPF prog-id=59 op=LOAD Oct 2 19:56:27.392000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:56:27.392000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit: BPF prog-id=60 op=LOAD Oct 2 19:56:27.394000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit: BPF prog-id=61 op=LOAD Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit[1]: 
AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.394000 audit: BPF prog-id=62 op=LOAD Oct 2 19:56:27.394000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:56:27.394000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:56:27.395000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.395000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.395000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.395000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.395000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.395000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.395000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.395000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.396000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.396000 audit: BPF prog-id=63 op=LOAD Oct 2 19:56:27.396000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.396000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.396000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.396000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.396000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.396000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.396000 audit[1]: AVC avc: denied { perfmon 
} for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.396000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.396000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.396000 audit: BPF prog-id=64 op=LOAD Oct 2 19:56:27.396000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:56:27.396000 audit: BPF prog-id=49 op=UNLOAD Oct 2 19:56:27.396000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.396000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.396000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.396000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.396000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.396000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.396000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.396000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.396000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.397000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.397000 audit: BPF prog-id=65 op=LOAD Oct 2 19:56:27.397000 audit: BPF prog-id=50 op=UNLOAD Oct 2 19:56:27.398000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.398000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.398000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.398000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.398000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.398000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.398000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.398000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.398000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.398000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.398000 audit: BPF prog-id=66 op=LOAD Oct 2 19:56:27.398000 audit: BPF prog-id=51 op=UNLOAD Oct 2 19:56:27.400000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.400000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.400000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.400000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.400000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.400000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.400000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.400000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.400000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.400000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.400000 audit: BPF prog-id=67 op=LOAD Oct 2 19:56:27.400000 audit: BPF prog-id=52 op=UNLOAD Oct 2 19:56:27.416862 systemd[1]: Started kubelet.service. Oct 2 19:56:27.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:27.463987 kubelet[1918]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:56:27.464303 kubelet[1918]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:56:27.464343 kubelet[1918]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:56:27.464499 kubelet[1918]: I1002 19:56:27.464466 1918 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:56:27.466156 kubelet[1918]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:56:27.466156 kubelet[1918]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:56:27.466156 kubelet[1918]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:56:27.773307 kubelet[1918]: I1002 19:56:27.773207 1918 server.go:413] "Kubelet version" kubeletVersion="v1.25.10" Oct 2 19:56:27.773307 kubelet[1918]: I1002 19:56:27.773235 1918 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:56:27.773775 kubelet[1918]: I1002 19:56:27.773755 1918 server.go:825] "Client rotation is on, will bootstrap in background" Oct 2 19:56:27.781487 kubelet[1918]: I1002 19:56:27.781470 1918 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:56:27.784377 kubelet[1918]: I1002 19:56:27.784357 1918 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 19:56:27.784659 kubelet[1918]: I1002 19:56:27.784644 1918 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:56:27.784755 kubelet[1918]: I1002 19:56:27.784742 1918 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Oct 2 19:56:27.784902 kubelet[1918]: I1002 19:56:27.784782 1918 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:56:27.784902 kubelet[1918]: I1002 19:56:27.784798 1918 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true Oct 2 19:56:27.784990 kubelet[1918]: I1002 19:56:27.784920 1918 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:56:27.788174 kubelet[1918]: I1002 19:56:27.788158 1918 kubelet.go:381] "Attempting to sync node with API server" Oct 2 19:56:27.788270 kubelet[1918]: I1002 19:56:27.788189 1918 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:56:27.788270 kubelet[1918]: I1002 19:56:27.788210 1918 kubelet.go:281] "Adding apiserver pod source" Oct 2 19:56:27.788270 kubelet[1918]: I1002 19:56:27.788224 1918 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:56:27.788747 kubelet[1918]: E1002 19:56:27.788731 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:27.788856 kubelet[1918]: E1002 19:56:27.788844 1918 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:27.789230 kubelet[1918]: I1002 19:56:27.789216 1918 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:56:27.789599 kubelet[1918]: W1002 19:56:27.789580 1918 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 2 19:56:27.789975 kubelet[1918]: I1002 19:56:27.789957 1918 server.go:1175] "Started kubelet" Oct 2 19:56:27.790859 kubelet[1918]: E1002 19:56:27.790845 1918 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:56:27.790986 kubelet[1918]: E1002 19:56:27.790975 1918 kubelet.go:1317] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:56:27.789000 audit[1918]: AVC avc: denied { mac_admin } for pid=1918 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.791909 kubelet[1918]: I1002 19:56:27.791894 1918 server.go:155] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:56:27.792674 kubelet[1918]: I1002 19:56:27.792660 1918 server.go:438] "Adding debug handlers to kubelet server" Oct 2 19:56:27.794324 kernel: kauditd_printk_skb: 361 callbacks suppressed Oct 2 19:56:27.794387 kernel: audit: type=1400 audit(1696276587.789:542): avc: denied { mac_admin } for pid=1918 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.799637 kubelet[1918]: I1002 19:56:27.799622 1918 kubelet.go:1274] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:56:27.799743 kubelet[1918]: I1002 19:56:27.799734 1918 kubelet.go:1278] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:56:27.799857 kubelet[1918]: I1002 19:56:27.799848 1918 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:56:27.801185 kubelet[1918]: I1002 19:56:27.801174 1918 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 19:56:27.802535 kubelet[1918]: I1002 19:56:27.802513 1918 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 2 19:56:27.803341 kubelet[1918]: E1002 19:56:27.803327 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:27.789000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:56:27.812543 kernel: audit: type=1401 audit(1696276587.789:542): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:56:27.789000 audit[1918]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00024c960 a1=c0006e7f80 a2=c00024c930 a3=25 items=0 ppid=1 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:27.830560 kernel: audit: type=1300 audit(1696276587.789:542): arch=c000003e syscall=188 success=no exit=-22 a0=c00024c960 a1=c0006e7f80 a2=c00024c930 a3=25 items=0 ppid=1 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 
19:56:27.789000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:56:27.843747 kubelet[1918]: W1002 19:56:27.843717 1918 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.200.8.20" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:27.843862 kubelet[1918]: E1002 19:56:27.843853 1918 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.20" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:27.843977 kubelet[1918]: W1002 19:56:27.843963 1918 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:27.844054 kubelet[1918]: E1002 19:56:27.844047 1918 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:27.850672 kernel: audit: type=1327 audit(1696276587.789:542): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:56:27.798000 audit[1918]: AVC avc: denied { mac_admin } for pid=1918 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.853535 kubelet[1918]: I1002 19:56:27.853513 1918 cpu_manager.go:213] "Starting CPU manager" policy="none" Oct 2 19:56:27.853626 kubelet[1918]: I1002 19:56:27.853618 1918 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s" Oct 2 19:56:27.853692 kubelet[1918]: I1002 19:56:27.853684 1918 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:56:27.854238 kubelet[1918]: W1002 19:56:27.854220 1918 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:56:27.854333 kubelet[1918]: E1002 19:56:27.854324 1918 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:56:27.854447 kubelet[1918]: E1002 19:56:27.854436 1918 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.200.8.20" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:56:27.854658 kubelet[1918]: E1002 19:56:27.854557 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291c007d61b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 789932059, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 789932059, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:27.855397 kubelet[1918]: E1002 19:56:27.855340 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291c01792c7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 790963399, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 790963399, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
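The rejected events above are named `<node>.<hex>`, e.g. `10.200.8.20.178a6291c007d61b` for the "Starting" event whose FirstTimestamp ends in 789932059 nanoseconds. The hex suffix is that first timestamp expressed as nanoseconds since the Unix epoch; the audit records in the same batch give the epoch second (1696276587), so the relationship can be checked directly from values visible in the log:

```python
# Check that the event name suffix is the event's first timestamp in UnixNano, hex-encoded.
# 1696276587 is the epoch second from audit(1696276587.789:542) above;
# 789932059 is the nanosecond part of the "Starting" event's FirstTimestamp.
epoch_second = 1_696_276_587
nanoseconds = 789_932_059

unix_nano = epoch_second * 10**9 + nanoseconds
suffix = format(unix_nano, "x")

print(suffix)                        # 178a6291c007d61b
assert suffix == "178a6291c007d61b"  # matches the name 10.200.8.20.178a6291c007d61b
```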
Oct 2 19:56:27.856127 kubelet[1918]: E1002 19:56:27.856076 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291c3c8a8ac", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.20 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852900524, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852900524, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:27.856785 kubelet[1918]: E1002 19:56:27.856738 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291c3c8bb08", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.20 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852905224, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852905224, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:27.857414 kubelet[1918]: E1002 19:56:27.857364 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291c3c8caa9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.20 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852909225, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852909225, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:27.870180 kernel: audit: type=1400 audit(1696276587.798:543): avc: denied { mac_admin } for pid=1918 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.870265 kernel: audit: type=1401 audit(1696276587.798:543): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:56:27.798000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:56:27.891491 kernel: audit: type=1300 audit(1696276587.798:543): arch=c000003e syscall=188 success=no exit=-22 a0=c00049db40 a1=c0006e7f98 a2=c00024c9f0 a3=25 items=0 ppid=1 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:27.798000 audit[1918]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00049db40 a1=c0006e7f98 a2=c00024c9f0 a3=25 items=0 ppid=1 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:27.891670 kubelet[1918]: I1002 19:56:27.890639 1918 policy_none.go:49] "None policy: Start" Oct 2 19:56:27.891670 kubelet[1918]: I1002 19:56:27.891455 1918 memory_manager.go:168] "Starting memorymanager" policy="None" Oct 2 19:56:27.891670 kubelet[1918]: I1002 19:56:27.891570 1918 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:56:27.798000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:56:27.903370 kubelet[1918]: E1002 19:56:27.903354 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:27.903759 kubelet[1918]: I1002 19:56:27.903748 1918 
kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.20" Oct 2 19:56:27.904659 kubelet[1918]: E1002 19:56:27.904644 1918 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.20" Oct 2 19:56:27.904952 kubelet[1918]: E1002 19:56:27.904893 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291c3c8a8ac", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.20 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852900524, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 903714536, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.20.178a6291c3c8a8ac" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:27.905744 kubelet[1918]: E1002 19:56:27.905679 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291c3c8bb08", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.20 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852905224, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 903718936, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.20.178a6291c3c8bb08" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:27.906548 kubelet[1918]: E1002 19:56:27.906475 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291c3c8caa9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.20 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852909225, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 903722537, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.20.178a6291c3c8caa9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:27.816000 audit[1931]: NETFILTER_CFG table=mangle:6 family=2 entries=2 op=nft_register_chain pid=1931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:27.918847 kernel: audit: type=1327 audit(1696276587.798:543): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:56:27.918885 kernel: audit: type=1325 audit(1696276587.816:544): table=mangle:6 family=2 entries=2 op=nft_register_chain pid=1931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:27.918901 kernel: audit: type=1300 audit(1696276587.816:544): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc83ff58c0 a2=0 a3=7ffc83ff58ac items=0 ppid=1918 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:27.816000 audit[1931]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc83ff58c0 a2=0 a3=7ffc83ff58ac items=0 ppid=1918 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:27.816000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:56:27.820000 audit[1932]: NETFILTER_CFG table=filter:7 family=2 entries=2 op=nft_register_chain pid=1932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:27.820000 audit[1932]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffffddc78f0 a2=0 a3=7ffffddc78dc items=0 ppid=1918 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:27.820000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:56:27.823000 audit[1934]: NETFILTER_CFG table=filter:8 family=2 entries=2 op=nft_register_chain pid=1934 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:27.823000 audit[1934]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc02db8670 a2=0 a3=7ffc02db865c items=0 ppid=1918 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:27.823000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:56:27.826000 audit[1936]: NETFILTER_CFG table=filter:9 family=2 entries=2 op=nft_register_chain pid=1936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:27.826000 audit[1936]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe2876dca0 a2=0 a3=7ffe2876dc8c items=0 ppid=1918 pid=1936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:27.826000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:56:27.892000 audit[1943]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_rule pid=1943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:27.892000 audit[1943]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffec45ee7a0 a2=0 a3=7ffec45ee78c items=0 ppid=1918 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:27.892000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:56:27.893000 audit[1944]: NETFILTER_CFG table=nat:11 family=2 entries=2 op=nft_register_chain pid=1944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:27.893000 audit[1944]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdb51aa9b0 a2=0 a3=7ffdb51aa99c items=0 ppid=1918 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:27.893000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:56:27.943345 systemd[1]: Created slice kubepods.slice. 
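The audit PROCTITLE records interleaved above carry the offending command line hex-encoded, with NUL bytes separating arguments (the long kubelet record is truncated by auditd, which is why its hex stops mid-flag). A short sketch that decodes one of the iptables records shown above; any other proctitle hex from this log can be passed through the same helper.

```python
# Decode an audit PROCTITLE hex dump back into a readable command line.
# The hex string below is copied verbatim from one of the NETFILTER_CFG groups above.
PROCTITLE_HEX = (
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D4649524557414C4C002D740066696C746572"
)

def decode_proctitle(hex_string: str) -> str:
    """Arguments are NUL-separated; drop empty trailing parts."""
    raw = bytes.fromhex(hex_string)
    return " ".join(part.decode() for part in raw.split(b"\x00") if part)

print(decode_proctitle(PROCTITLE_HEX))
# -> iptables -w 5 -W 100000 -N KUBE-FIREWALL -t filter
```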
Oct 2 19:56:27.941000 audit[1947]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:27.941000 audit[1947]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd0d8dde80 a2=0 a3=7ffd0d8dde6c items=0 ppid=1918 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:27.941000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:56:27.948674 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:56:27.951324 systemd[1]: Created slice kubepods-besteffort.slice. Oct 2 19:56:27.960353 kubelet[1918]: I1002 19:56:27.960331 1918 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:56:27.958000 audit[1950]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1950 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:27.958000 audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffdd018e560 a2=0 a3=7ffdd018e54c items=0 ppid=1918 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:27.958000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:56:27.959000 audit[1918]: AVC avc: denied { mac_admin } for pid=1918 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:27.959000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:56:27.959000 audit[1918]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000cfa030 a1=c000b88ae0 a2=c000cfa000 a3=25 items=0 ppid=1 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:27.959000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:56:27.960935 kubelet[1918]: I1002 19:56:27.960742 1918 server.go:86] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:56:27.960935 kubelet[1918]: I1002 19:56:27.960930 1918 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:56:27.963185 kubelet[1918]: E1002 19:56:27.963170 1918 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.20\" not found" Oct 2 19:56:27.964319 kubelet[1918]: E1002 19:56:27.964237 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291ca579b7b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 962932091, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 962932091, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
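The mac_admin denials and "could not set selinux context" warnings above come from the kubelet calling setxattr with the context system_u:object_r:container_file_t:s0 on its plugin directories and getting exit=-22 (EINVAL), i.e. the running policy does not accept that label. A minimal sketch of the same probe, assuming a Linux host with xattr support; the path below is a placeholder, the kubelet itself targets /var/lib/kubelet/plugins_registry, /var/lib/kubelet/plugins and /var/lib/kubelet/device-plugins/.

```python
import errno
import os

# Hypothetical probe directory; not one of the kubelet's real paths.
path = "/tmp/selinux-probe"
os.makedirs(path, exist_ok=True)

context = b"system_u:object_r:container_file_t:s0"
try:
    os.setxattr(path, "security.selinux", context)
    print("relabel succeeded")
except OSError as err:
    if err.errno in (errno.EINVAL, errno.EOPNOTSUPP):
        # Same failure mode as the log: the context is rejected by the loaded
        # policy (or SELinux xattrs are unsupported), so unprivileged
        # containerized plugins may not work as expected.
        print(f"setxattr failed: {err.strerror}")
    else:
        raise
```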
Oct 2 19:56:27.962000 audit[1951]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_chain pid=1951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:27.962000 audit[1951]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffee766c480 a2=0 a3=7ffee766c46c items=0 ppid=1918 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:27.962000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:56:27.964000 audit[1952]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_chain pid=1952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:27.964000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc628b2ba0 a2=0 a3=7ffc628b2b8c items=0 ppid=1918 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:27.964000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:56:27.966000 audit[1954]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:27.966000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd0dbbea00 a2=0 a3=7ffd0dbbe9ec items=0 ppid=1918 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:27.966000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:56:28.003632 kubelet[1918]: E1002 19:56:28.003592 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:27.969000 audit[1956]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1956 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:27.969000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffeafedcd60 a2=0 a3=7ffeafedcd4c items=0 ppid=1918 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:27.969000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:56:28.023000 audit[1959]: NETFILTER_CFG table=nat:18 family=2 entries=1 op=nft_register_rule pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:28.023000 audit[1959]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffd108a45f0 a2=0 a3=7ffd108a45dc items=0 ppid=1918 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.023000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:56:28.026000 audit[1961]: NETFILTER_CFG table=nat:19 family=2 entries=1 op=nft_register_rule pid=1961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:28.026000 audit[1961]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffe901bbe40 a2=0 a3=7ffe901bbe2c items=0 ppid=1918 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.026000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:56:28.056296 kubelet[1918]: E1002 19:56:28.056248 1918 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.200.8.20" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:56:28.062000 audit[1964]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_rule pid=1964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:28.062000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffcc0ae7780 a2=0 a3=7ffcc0ae776c items=0 ppid=1918 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.062000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:56:28.064479 kubelet[1918]: I1002 19:56:28.064284 1918 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Oct 2 19:56:28.064000 audit[1966]: NETFILTER_CFG table=mangle:21 family=2 entries=1 op=nft_register_chain pid=1966 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:28.064000 audit[1965]: NETFILTER_CFG table=mangle:22 family=10 entries=2 op=nft_register_chain pid=1965 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:28.064000 audit[1965]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc17d668c0 a2=0 a3=7ffc17d668ac items=0 ppid=1918 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.064000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:56:28.064000 audit[1966]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd52dc7610 a2=0 a3=7ffd52dc75fc items=0 ppid=1918 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.064000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:56:28.066000 audit[1967]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1967 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:28.066000 audit[1967]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd6fe0a800 a2=0 a3=7ffd6fe0a7ec items=0 ppid=1918 pid=1967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.066000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:56:28.066000 audit[1968]: NETFILTER_CFG table=nat:24 family=2 entries=1 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:28.066000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe148cf390 a2=0 a3=10e3 items=0 ppid=1918 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.066000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:56:28.069000 audit[1970]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:28.069000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeced9bfc0 a2=0 a3=7ffeced9bfac items=0 ppid=1918 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.069000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:56:28.069000 audit[1971]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_rule pid=1971 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 
2 19:56:28.069000 audit[1971]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff1c484d00 a2=0 a3=7fff1c484cec items=0 ppid=1918 pid=1971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.069000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:56:28.070000 audit[1972]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:28.070000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7fff587b9820 a2=0 a3=7fff587b980c items=0 ppid=1918 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.070000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:56:28.073000 audit[1974]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:28.073000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffc90337a20 a2=0 a3=7ffc90337a0c items=0 ppid=1918 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.073000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:56:28.074000 audit[1975]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_chain pid=1975 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:28.074000 audit[1975]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe77033d90 a2=0 a3=7ffe77033d7c items=0 ppid=1918 pid=1975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.074000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:56:28.075000 audit[1976]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:28.075000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4be8ad50 a2=0 a3=7ffc4be8ad3c items=0 ppid=1918 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.075000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:56:28.077000 audit[1978]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" 
Oct 2 19:56:28.077000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff76537080 a2=0 a3=7fff7653706c items=0 ppid=1918 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.077000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:56:28.079000 audit[1980]: NETFILTER_CFG table=nat:32 family=10 entries=2 op=nft_register_chain pid=1980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:28.079000 audit[1980]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffea4519660 a2=0 a3=7ffea451964c items=0 ppid=1918 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.079000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:56:28.081000 audit[1982]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_rule pid=1982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:28.081000 audit[1982]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffee59aa7d0 a2=0 a3=7ffee59aa7bc items=0 ppid=1918 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.081000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:56:28.083000 audit[1984]: NETFILTER_CFG table=nat:34 family=10 entries=1 op=nft_register_rule pid=1984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:28.083000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffe39c8bab0 a2=0 a3=7ffe39c8ba9c items=0 ppid=1918 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.083000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:56:28.086000 audit[1986]: NETFILTER_CFG table=nat:35 family=10 entries=1 op=nft_register_rule pid=1986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:28.086000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffce6d15200 a2=0 a3=7ffce6d151ec items=0 ppid=1918 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.086000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:56:28.088155 kubelet[1918]: I1002 19:56:28.088129 1918 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Oct 2 19:56:28.088243 kubelet[1918]: I1002 19:56:28.088162 1918 status_manager.go:161] "Starting to sync pod status with apiserver" Oct 2 19:56:28.088243 kubelet[1918]: I1002 19:56:28.088183 1918 kubelet.go:2010] "Starting kubelet main sync loop" Oct 2 19:56:28.088243 kubelet[1918]: E1002 19:56:28.088228 1918 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:56:28.087000 audit[1987]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=1987 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:28.087000 audit[1987]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc45d53aa0 a2=0 a3=7ffc45d53a8c items=0 ppid=1918 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.087000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:56:28.088000 audit[1988]: NETFILTER_CFG table=nat:37 family=10 entries=1 op=nft_register_chain pid=1988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:28.088000 audit[1988]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe64969060 a2=0 a3=7ffe6496904c items=0 ppid=1918 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.088000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:56:28.090991 kubelet[1918]: W1002 19:56:28.090966 1918 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:56:28.091061 kubelet[1918]: E1002 19:56:28.091001 1918 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:56:28.089000 audit[1989]: NETFILTER_CFG table=filter:38 family=10 entries=1 op=nft_register_chain pid=1989 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:28.089000 audit[1989]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc5cafdc60 a2=0 a3=7ffc5cafdc4c items=0 ppid=1918 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.089000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:56:28.104006 
kubelet[1918]: E1002 19:56:28.103976 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:28.105639 kubelet[1918]: I1002 19:56:28.105621 1918 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.20" Oct 2 19:56:28.106820 kubelet[1918]: E1002 19:56:28.106799 1918 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.20" Oct 2 19:56:28.106913 kubelet[1918]: E1002 19:56:28.106801 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291c3c8a8ac", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.20 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852900524, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 28, 105585183, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.20.178a6291c3c8a8ac" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:28.107739 kubelet[1918]: E1002 19:56:28.107676 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291c3c8bb08", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.20 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852905224, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 28, 105595184, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.20.178a6291c3c8bb08" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
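Nearly every failure in this stretch is the same RBAC rejection: the kubelet is still reaching the API server as system:anonymous, which usually means its TLS bootstrap has not completed or the bootstrap credentials are not being accepted, so list/create/patch calls are forbidden. When triaging a log like this it helps to reduce the noise to (user, verb, resource) tuples; a small parsing sketch over the "forbidden" message format seen above, with sample strings taken from the log.

```python
import re
from collections import Counter

# Matches the apiserver "forbidden" message embedded in the kubelet lines above.
FORBIDDEN = re.compile(
    r'User "(?P<user>[^"]+)" cannot (?P<verb>\w+) resource "(?P<resource>[^"]+)"'
)

sample_lines = [
    'nodes "10.200.8.20" is forbidden: User "system:anonymous" cannot list '
    'resource "nodes" in API group "" at the cluster scope',
    'events is forbidden: User "system:anonymous" cannot create resource '
    '"events" in API group "" in the namespace "default"',
]

counts = Counter()
for line in sample_lines:
    match = FORBIDDEN.search(line)
    if match:
        counts[(match["user"], match["verb"], match["resource"])] += 1

for (user, verb, resource), n in counts.items():
    print(f"{n}x {user} cannot {verb} {resource}")
```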
Oct 2 19:56:28.192468 kubelet[1918]: E1002 19:56:28.192358 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291c3c8caa9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.20 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852909225, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 28, 105599085, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.20.178a6291c3c8caa9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:28.204686 kubelet[1918]: E1002 19:56:28.204651 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:28.305359 kubelet[1918]: E1002 19:56:28.305316 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:28.405866 kubelet[1918]: E1002 19:56:28.405820 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:28.458508 kubelet[1918]: E1002 19:56:28.458460 1918 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.200.8.20" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:56:28.505988 kubelet[1918]: E1002 19:56:28.505939 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:28.507692 kubelet[1918]: I1002 19:56:28.507666 1918 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.20" Oct 2 19:56:28.508885 kubelet[1918]: E1002 19:56:28.508861 1918 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.20" Oct 2 19:56:28.509023 kubelet[1918]: E1002 19:56:28.508853 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291c3c8a8ac", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", APIVersion:"", ResourceVersion:"", FieldPath:""}, 
Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.20 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852900524, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 28, 507623964, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.20.178a6291c3c8a8ac" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:28.591861 kubelet[1918]: E1002 19:56:28.591678 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291c3c8bb08", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.20 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852905224, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 28, 507635466, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.20.178a6291c3c8bb08" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:28.606989 kubelet[1918]: E1002 19:56:28.606949 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:28.707766 kubelet[1918]: E1002 19:56:28.707682 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:28.723026 kubelet[1918]: W1002 19:56:28.722995 1918 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:28.723151 kubelet[1918]: E1002 19:56:28.723032 1918 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:28.789509 kubelet[1918]: E1002 19:56:28.789447 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:28.791803 kubelet[1918]: E1002 19:56:28.791708 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291c3c8caa9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.20 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852909225, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 28, 507640266, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.20.178a6291c3c8caa9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:28.808031 kubelet[1918]: E1002 19:56:28.808004 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:28.909041 kubelet[1918]: E1002 19:56:28.908917 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:28.961418 kubelet[1918]: W1002 19:56:28.961383 1918 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.200.8.20" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:28.961418 kubelet[1918]: E1002 19:56:28.961420 1918 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.20" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:29.009922 kubelet[1918]: E1002 19:56:29.009878 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:29.038437 kubelet[1918]: W1002 19:56:29.038398 1918 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:56:29.038437 kubelet[1918]: E1002 19:56:29.038437 1918 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:56:29.110066 kubelet[1918]: E1002 19:56:29.110007 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:29.210721 kubelet[1918]: E1002 19:56:29.210597 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:29.259978 kubelet[1918]: E1002 19:56:29.259931 1918 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.200.8.20" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:56:29.310317 kubelet[1918]: I1002 19:56:29.310277 1918 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.20" Oct 2 19:56:29.310701 kubelet[1918]: E1002 19:56:29.310652 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:29.311612 kubelet[1918]: E1002 19:56:29.311588 1918 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.20" Oct 2 19:56:29.311729 kubelet[1918]: E1002 19:56:29.311578 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291c3c8a8ac", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", 
APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.20 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852900524, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 29, 310227434, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.20.178a6291c3c8a8ac" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:29.312704 kubelet[1918]: E1002 19:56:29.312622 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291c3c8bb08", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.20 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852905224, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 29, 310241936, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.20.178a6291c3c8bb08" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:29.359052 kubelet[1918]: W1002 19:56:29.359015 1918 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:56:29.359052 kubelet[1918]: E1002 19:56:29.359054 1918 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:56:29.392042 kubelet[1918]: E1002 19:56:29.391936 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291c3c8caa9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.20 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852909225, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 29, 310247537, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.20.178a6291c3c8caa9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:29.411306 kubelet[1918]: E1002 19:56:29.411264 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:29.512010 kubelet[1918]: E1002 19:56:29.511881 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:29.612356 kubelet[1918]: E1002 19:56:29.612316 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:29.712823 kubelet[1918]: E1002 19:56:29.712771 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:29.790345 kubelet[1918]: E1002 19:56:29.790311 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:29.813684 kubelet[1918]: E1002 19:56:29.813647 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:29.914654 kubelet[1918]: E1002 19:56:29.914607 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:30.015540 kubelet[1918]: E1002 19:56:30.015476 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:30.115709 kubelet[1918]: E1002 19:56:30.115586 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:30.216102 kubelet[1918]: E1002 19:56:30.216053 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:30.316930 kubelet[1918]: E1002 19:56:30.316877 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:30.417659 kubelet[1918]: E1002 19:56:30.417429 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:30.517959 kubelet[1918]: E1002 19:56:30.517911 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:30.618433 kubelet[1918]: E1002 19:56:30.618380 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:30.621802 kubelet[1918]: W1002 19:56:30.621768 1918 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.200.8.20" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:30.621802 kubelet[1918]: E1002 19:56:30.621805 1918 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.20" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:30.719366 kubelet[1918]: E1002 19:56:30.719239 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:30.790739 kubelet[1918]: E1002 19:56:30.790680 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:30.820215 kubelet[1918]: E1002 19:56:30.820169 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:30.861777 kubelet[1918]: E1002 19:56:30.861729 1918 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.200.8.20" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:56:30.880838 kubelet[1918]: W1002 19:56:30.880807 1918 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User 
"system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:30.880838 kubelet[1918]: E1002 19:56:30.880840 1918 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:30.913495 kubelet[1918]: I1002 19:56:30.913141 1918 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.20" Oct 2 19:56:30.914091 kubelet[1918]: E1002 19:56:30.914060 1918 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.20" Oct 2 19:56:30.914299 kubelet[1918]: E1002 19:56:30.914196 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291c3c8a8ac", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.20 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852900524, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 30, 913086036, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.20.178a6291c3c8a8ac" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:30.915150 kubelet[1918]: E1002 19:56:30.915082 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291c3c8bb08", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.20 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852905224, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 30, 913097738, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.20.178a6291c3c8bb08" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:30.915993 kubelet[1918]: E1002 19:56:30.915930 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291c3c8caa9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.20 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852909225, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 30, 913102638, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.20.178a6291c3c8caa9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
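The paired "Attempting to register node" / "Unable to register node with API server" messages show the same credential problem on the registration path: the kubelet self-registers by creating a Node object named after its address, and that create is denied for system:anonymous until a client certificate is in place (the attempt at 19:56:40 further down finally succeeds). An illustrative client-go sketch of the shape of that create is below; the in-cluster config, the label and the error handling are assumptions for the example and this is not the kubelet's actual code.

    // register_sketch.go - illustrative only: the shape of the Node create that
    // kubelet_node_status.go keeps retrying above. Node name taken from the log;
    // in-cluster config is an assumption.
    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node := &corev1.Node{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "10.200.8.20",
                Labels: map[string]string{"kubernetes.io/hostname": "10.200.8.20"},
            },
        }
        // With anonymous credentials this returns the same "nodes is forbidden"
        // error seen in the log; with a valid client certificate it succeeds.
        if _, err := cs.CoreV1().Nodes().Create(context.TODO(), node, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }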
Oct 2 19:56:30.921038 kubelet[1918]: E1002 19:56:30.921022 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:31.022098 kubelet[1918]: E1002 19:56:31.021961 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:31.122926 kubelet[1918]: E1002 19:56:31.122873 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:31.223553 kubelet[1918]: E1002 19:56:31.223483 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:31.324289 kubelet[1918]: E1002 19:56:31.324251 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:31.424835 kubelet[1918]: E1002 19:56:31.424788 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:31.525403 kubelet[1918]: E1002 19:56:31.525351 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:31.626100 kubelet[1918]: E1002 19:56:31.625894 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:31.697127 kubelet[1918]: W1002 19:56:31.697086 1918 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:56:31.697127 kubelet[1918]: E1002 19:56:31.697126 1918 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:56:31.726410 kubelet[1918]: E1002 19:56:31.726366 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:31.791820 kubelet[1918]: E1002 19:56:31.791758 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:31.827264 kubelet[1918]: E1002 19:56:31.827220 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:31.928084 kubelet[1918]: E1002 19:56:31.927967 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:32.028756 kubelet[1918]: E1002 19:56:32.028709 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:32.129679 kubelet[1918]: E1002 19:56:32.129628 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:32.145102 kubelet[1918]: W1002 19:56:32.145074 1918 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:56:32.145211 kubelet[1918]: E1002 19:56:32.145110 1918 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:56:32.230639 kubelet[1918]: E1002 19:56:32.230500 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:32.331273 kubelet[1918]: E1002 19:56:32.331224 1918 kubelet.go:2448] "Error getting node" 
err="node \"10.200.8.20\" not found" Oct 2 19:56:32.431731 kubelet[1918]: E1002 19:56:32.431691 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:32.532586 kubelet[1918]: E1002 19:56:32.532446 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:32.632889 kubelet[1918]: E1002 19:56:32.632839 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:32.733463 kubelet[1918]: E1002 19:56:32.733408 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:32.792816 kubelet[1918]: E1002 19:56:32.792771 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:32.834323 kubelet[1918]: E1002 19:56:32.834275 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:32.935069 kubelet[1918]: E1002 19:56:32.935027 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:32.961878 kubelet[1918]: E1002 19:56:32.961835 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:33.035494 kubelet[1918]: E1002 19:56:33.035438 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:33.136446 kubelet[1918]: E1002 19:56:33.136316 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:33.236850 kubelet[1918]: E1002 19:56:33.236804 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:33.337624 kubelet[1918]: E1002 19:56:33.337567 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:33.438259 kubelet[1918]: E1002 19:56:33.438135 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:33.538654 kubelet[1918]: E1002 19:56:33.538608 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:33.639126 kubelet[1918]: E1002 19:56:33.639071 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:33.739687 kubelet[1918]: E1002 19:56:33.739570 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:33.793049 kubelet[1918]: E1002 19:56:33.792989 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:33.840482 kubelet[1918]: E1002 19:56:33.840436 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:33.941223 kubelet[1918]: E1002 19:56:33.941180 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:34.041854 kubelet[1918]: E1002 19:56:34.041813 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:34.063388 kubelet[1918]: E1002 19:56:34.063330 1918 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.200.8.20" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:56:34.114721 kubelet[1918]: I1002 19:56:34.114686 1918 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.20" Oct 2 19:56:34.115710 kubelet[1918]: E1002 19:56:34.115686 1918 
kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.20" Oct 2 19:56:34.116166 kubelet[1918]: E1002 19:56:34.116093 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291c3c8a8ac", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.20 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852900524, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 34, 114643933, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.20.178a6291c3c8a8ac" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:34.116964 kubelet[1918]: E1002 19:56:34.116908 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291c3c8bb08", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.20 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852905224, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 34, 114653535, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.20.178a6291c3c8bb08" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
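The controller.go messages above retry the node lease with a doubling interval: 1.6s, then 3.2s, then 6.4s for the lease in kube-node-lease. A small self-contained sketch of that doubling pattern is shown below; the ensureLease stub and the attempt cap are placeholders for illustration and do not reflect the kubelet's real lease client.

    // lease_backoff.go - sketch of the doubling retry interval visible in the
    // controller.go entries (1.6s -> 3.2s -> 6.4s).
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // ensureLease is a placeholder that always fails, standing in for the
    // forbidden leases.coordination.k8s.io call in the log.
    func ensureLease() error {
        return errors.New(`leases.coordination.k8s.io "10.200.8.20" is forbidden`)
    }

    func main() {
        interval := 1600 * time.Millisecond
        for attempt := 0; attempt < 3; attempt++ {
            if err := ensureLease(); err == nil {
                return
            }
            fmt.Printf("failed to ensure lease exists, will retry in %v\n", interval)
            time.Sleep(interval)
            interval *= 2 // 1.6s -> 3.2s -> 6.4s, matching the log
        }
    }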
Oct 2 19:56:34.117724 kubelet[1918]: E1002 19:56:34.117668 1918 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.20.178a6291c3c8caa9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.20", UID:"10.200.8.20", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.20 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.20"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 27, 852909225, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 34, 114661236, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.20.178a6291c3c8caa9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:34.142061 kubelet[1918]: E1002 19:56:34.141981 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:34.242516 kubelet[1918]: E1002 19:56:34.242457 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:34.264058 kubelet[1918]: W1002 19:56:34.264024 1918 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:34.264058 kubelet[1918]: E1002 19:56:34.264060 1918 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:34.342765 kubelet[1918]: E1002 19:56:34.342643 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:34.443169 kubelet[1918]: E1002 19:56:34.443128 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:34.543655 kubelet[1918]: E1002 19:56:34.543615 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:34.644306 kubelet[1918]: E1002 19:56:34.644179 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:34.744700 kubelet[1918]: E1002 19:56:34.744655 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:34.794193 kubelet[1918]: E1002 19:56:34.794137 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:34.844816 kubelet[1918]: E1002 19:56:34.844778 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:34.945174 kubelet[1918]: E1002 19:56:34.945062 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:35.045817 kubelet[1918]: E1002 19:56:35.045770 
1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:35.146825 kubelet[1918]: E1002 19:56:35.146772 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:35.247519 kubelet[1918]: E1002 19:56:35.247385 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:35.348040 kubelet[1918]: E1002 19:56:35.347992 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:35.448647 kubelet[1918]: E1002 19:56:35.448590 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:35.549104 kubelet[1918]: E1002 19:56:35.549066 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:35.649674 kubelet[1918]: E1002 19:56:35.649616 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:35.750177 kubelet[1918]: E1002 19:56:35.750123 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:35.794742 kubelet[1918]: E1002 19:56:35.794689 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:35.851347 kubelet[1918]: E1002 19:56:35.851239 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:35.951815 kubelet[1918]: E1002 19:56:35.951768 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:35.977365 kubelet[1918]: W1002 19:56:35.977325 1918 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:56:35.977365 kubelet[1918]: E1002 19:56:35.977365 1918 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:56:36.052777 kubelet[1918]: E1002 19:56:36.052728 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:36.153701 kubelet[1918]: E1002 19:56:36.153577 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:36.168111 kubelet[1918]: W1002 19:56:36.168075 1918 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.200.8.20" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:36.168111 kubelet[1918]: E1002 19:56:36.168112 1918 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.20" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:36.254692 kubelet[1918]: E1002 19:56:36.254631 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:36.355401 kubelet[1918]: E1002 19:56:36.355344 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:36.456158 kubelet[1918]: E1002 19:56:36.456039 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:36.556587 kubelet[1918]: E1002 19:56:36.556511 1918 kubelet.go:2448] "Error getting node" 
err="node \"10.200.8.20\" not found" Oct 2 19:56:36.657150 kubelet[1918]: E1002 19:56:36.657097 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:36.757905 kubelet[1918]: E1002 19:56:36.757736 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:36.795408 kubelet[1918]: E1002 19:56:36.795354 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:36.858299 kubelet[1918]: E1002 19:56:36.858253 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:36.958895 kubelet[1918]: E1002 19:56:36.958840 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:37.059583 kubelet[1918]: E1002 19:56:37.059520 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:37.160456 kubelet[1918]: E1002 19:56:37.160399 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:37.190889 kubelet[1918]: W1002 19:56:37.190847 1918 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:56:37.190889 kubelet[1918]: E1002 19:56:37.190887 1918 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:56:37.261443 kubelet[1918]: E1002 19:56:37.261387 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:37.362183 kubelet[1918]: E1002 19:56:37.362058 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:37.462852 kubelet[1918]: E1002 19:56:37.462797 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:37.563539 kubelet[1918]: E1002 19:56:37.563477 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:37.664071 kubelet[1918]: E1002 19:56:37.663938 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:37.764602 kubelet[1918]: E1002 19:56:37.764542 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:37.780941 kubelet[1918]: I1002 19:56:37.780892 1918 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:56:37.796391 kubelet[1918]: E1002 19:56:37.796358 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:37.865762 kubelet[1918]: E1002 19:56:37.865653 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:37.963503 kubelet[1918]: E1002 19:56:37.963375 1918 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.20\" not found" Oct 2 19:56:37.963503 kubelet[1918]: E1002 19:56:37.963374 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:37.965939 kubelet[1918]: E1002 19:56:37.965907 
1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:38.066646 kubelet[1918]: E1002 19:56:38.066594 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:38.167432 kubelet[1918]: E1002 19:56:38.167370 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:38.199051 kubelet[1918]: E1002 19:56:38.199017 1918 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.8.20" not found Oct 2 19:56:38.267585 kubelet[1918]: E1002 19:56:38.267445 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:38.368105 kubelet[1918]: E1002 19:56:38.368050 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:38.469111 kubelet[1918]: E1002 19:56:38.469058 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:38.569905 kubelet[1918]: E1002 19:56:38.569858 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:38.670164 kubelet[1918]: E1002 19:56:38.670124 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:38.770855 kubelet[1918]: E1002 19:56:38.770805 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:38.797406 kubelet[1918]: E1002 19:56:38.797373 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:38.871584 kubelet[1918]: E1002 19:56:38.871444 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:38.972251 kubelet[1918]: E1002 19:56:38.972207 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:39.072807 kubelet[1918]: E1002 19:56:39.072750 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:39.173396 kubelet[1918]: E1002 19:56:39.173267 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:39.248377 kubelet[1918]: E1002 19:56:39.248337 1918 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.8.20" not found Oct 2 19:56:39.273411 kubelet[1918]: E1002 19:56:39.273369 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:39.373540 kubelet[1918]: E1002 19:56:39.373472 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:39.474008 kubelet[1918]: E1002 19:56:39.473874 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:39.574415 kubelet[1918]: E1002 19:56:39.574366 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:39.675429 kubelet[1918]: E1002 19:56:39.675375 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:39.776020 kubelet[1918]: E1002 19:56:39.775883 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:39.798663 kubelet[1918]: E1002 19:56:39.798614 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:39.876039 kubelet[1918]: E1002 19:56:39.875994 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:39.976943 
kubelet[1918]: E1002 19:56:39.976886 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:40.077351 kubelet[1918]: E1002 19:56:40.077300 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:40.178372 kubelet[1918]: E1002 19:56:40.178310 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:40.278580 kubelet[1918]: E1002 19:56:40.278512 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:40.379387 kubelet[1918]: E1002 19:56:40.379251 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:40.468224 kubelet[1918]: E1002 19:56:40.468185 1918 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.8.20\" not found" node="10.200.8.20" Oct 2 19:56:40.480315 kubelet[1918]: E1002 19:56:40.480278 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:40.517613 kubelet[1918]: I1002 19:56:40.517573 1918 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.20" Oct 2 19:56:40.581203 kubelet[1918]: E1002 19:56:40.581148 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:40.649361 kubelet[1918]: I1002 19:56:40.649189 1918 kubelet_node_status.go:73] "Successfully registered node" node="10.200.8.20" Oct 2 19:56:40.681845 kubelet[1918]: E1002 19:56:40.681802 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:40.782820 kubelet[1918]: E1002 19:56:40.782715 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:40.799280 kubelet[1918]: E1002 19:56:40.799242 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:40.883183 kubelet[1918]: E1002 19:56:40.883126 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:40.933000 audit[1699]: USER_END pid=1699 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:40.933977 sudo[1699]: pam_unix(sudo:session): session closed for user root Oct 2 19:56:40.938428 kernel: kauditd_printk_skb: 101 callbacks suppressed Oct 2 19:56:40.938495 kernel: audit: type=1106 audit(1696276600.933:578): pid=1699 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:40.935000 audit[1699]: CRED_DISP pid=1699 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:40.966213 kernel: audit: type=1104 audit(1696276600.935:579): pid=1699 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:56:40.983651 kubelet[1918]: E1002 19:56:40.983612 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:41.034422 sshd[1696]: pam_unix(sshd:session): session closed for user core Oct 2 19:56:41.035000 audit[1696]: USER_END pid=1696 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:56:41.037715 systemd[1]: sshd@6-10.200.8.20:22-10.200.12.6:44298.service: Deactivated successfully. Oct 2 19:56:41.038545 systemd[1]: session-9.scope: Deactivated successfully. Oct 2 19:56:41.040298 systemd-logind[1319]: Session 9 logged out. Waiting for processes to exit. Oct 2 19:56:41.041291 systemd-logind[1319]: Removed session 9. Oct 2 19:56:41.035000 audit[1696]: CRED_DISP pid=1696 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:56:41.070141 kernel: audit: type=1106 audit(1696276601.035:580): pid=1696 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:56:41.070210 kernel: audit: type=1104 audit(1696276601.035:581): pid=1696 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 19:56:41.070228 kernel: audit: type=1131 audit(1696276601.035:582): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.20:22-10.200.12.6:44298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:41.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.20:22-10.200.12.6:44298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:56:41.084678 kubelet[1918]: E1002 19:56:41.084594 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:41.185403 kubelet[1918]: E1002 19:56:41.185263 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:41.286281 kubelet[1918]: E1002 19:56:41.286237 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:41.386854 kubelet[1918]: E1002 19:56:41.386813 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:41.487435 kubelet[1918]: E1002 19:56:41.487294 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:41.587865 kubelet[1918]: E1002 19:56:41.587805 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:41.688409 kubelet[1918]: E1002 19:56:41.688354 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:41.789467 kubelet[1918]: E1002 19:56:41.789351 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:41.799784 kubelet[1918]: E1002 19:56:41.799756 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:41.890148 kubelet[1918]: E1002 19:56:41.890106 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:41.990907 kubelet[1918]: E1002 19:56:41.990851 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:42.091601 kubelet[1918]: E1002 19:56:42.091547 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:42.192332 kubelet[1918]: E1002 19:56:42.192282 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:42.293078 kubelet[1918]: E1002 19:56:42.293027 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:42.393684 kubelet[1918]: E1002 19:56:42.393563 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:42.494295 kubelet[1918]: E1002 19:56:42.494247 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:42.594865 kubelet[1918]: E1002 19:56:42.594814 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:42.695581 kubelet[1918]: E1002 19:56:42.695441 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:42.796187 kubelet[1918]: E1002 19:56:42.796138 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:42.800487 kubelet[1918]: E1002 19:56:42.800460 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:42.897155 kubelet[1918]: E1002 19:56:42.897102 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:42.964925 kubelet[1918]: E1002 19:56:42.964803 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:42.997514 kubelet[1918]: E1002 19:56:42.997464 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:43.098397 kubelet[1918]: E1002 19:56:43.098346 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 
19:56:43.199150 kubelet[1918]: E1002 19:56:43.199097 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:43.299701 kubelet[1918]: E1002 19:56:43.299661 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:43.400191 kubelet[1918]: E1002 19:56:43.400145 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:43.500701 kubelet[1918]: E1002 19:56:43.500653 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:43.601183 kubelet[1918]: E1002 19:56:43.601071 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:43.701575 kubelet[1918]: E1002 19:56:43.701507 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:43.801337 kubelet[1918]: E1002 19:56:43.801282 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:43.802386 kubelet[1918]: E1002 19:56:43.802360 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:43.902995 kubelet[1918]: E1002 19:56:43.902874 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:44.003444 kubelet[1918]: E1002 19:56:44.003395 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:44.104102 kubelet[1918]: E1002 19:56:44.104052 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:44.204553 kubelet[1918]: E1002 19:56:44.204417 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:44.305157 kubelet[1918]: E1002 19:56:44.305111 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:44.405630 kubelet[1918]: E1002 19:56:44.405574 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:44.506452 kubelet[1918]: E1002 19:56:44.506329 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:44.606859 kubelet[1918]: E1002 19:56:44.606790 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:44.707417 kubelet[1918]: E1002 19:56:44.707366 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:44.802002 kubelet[1918]: E1002 19:56:44.801960 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:44.808211 kubelet[1918]: E1002 19:56:44.808182 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:44.908722 kubelet[1918]: E1002 19:56:44.908674 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:45.009632 kubelet[1918]: E1002 19:56:45.009581 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:45.110615 kubelet[1918]: E1002 19:56:45.110494 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:45.210968 kubelet[1918]: E1002 19:56:45.210921 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:45.311867 kubelet[1918]: E1002 19:56:45.311815 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:45.412552 kubelet[1918]: E1002 19:56:45.412432 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 
2 19:56:45.512990 kubelet[1918]: E1002 19:56:45.512938 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:45.613654 kubelet[1918]: E1002 19:56:45.613606 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:45.714186 kubelet[1918]: E1002 19:56:45.714064 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:45.802947 kubelet[1918]: E1002 19:56:45.802889 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:45.815086 kubelet[1918]: E1002 19:56:45.815041 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:45.915738 kubelet[1918]: E1002 19:56:45.915643 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:46.015983 kubelet[1918]: E1002 19:56:46.015867 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:46.116995 kubelet[1918]: E1002 19:56:46.116963 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:46.217588 kubelet[1918]: E1002 19:56:46.217520 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:46.318464 kubelet[1918]: E1002 19:56:46.318364 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:46.418957 kubelet[1918]: E1002 19:56:46.418904 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:46.519503 kubelet[1918]: E1002 19:56:46.519441 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:46.619974 kubelet[1918]: E1002 19:56:46.619849 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:46.720402 kubelet[1918]: E1002 19:56:46.720353 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:46.803047 kubelet[1918]: E1002 19:56:46.802995 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:46.821230 kubelet[1918]: E1002 19:56:46.821188 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:46.921998 kubelet[1918]: E1002 19:56:46.921884 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:47.022664 kubelet[1918]: E1002 19:56:47.022617 1918 kubelet.go:2448] "Error getting node" err="node \"10.200.8.20\" not found" Oct 2 19:56:47.122825 kubelet[1918]: I1002 19:56:47.122778 1918 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:56:47.123251 env[1333]: time="2023-10-02T19:56:47.123202874Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
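At 19:56:47 the kubelet receives pod CIDR 192.168.1.0/24 and pushes it to the container runtime over CRI, while the "Container runtime network not ready" condition persists until a CNI config is dropped in. A minimal client-go sketch that reads the allocation back from the Node object follows; the kubeconfig path is a hypothetical assumption and the node name comes from the log.

    // podcidr_check.go - minimal sketch: read back the pod CIDR the kubelet just
    // pushed via CRI. Kubeconfig path is a hypothetical assumption.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.kubeconfig") // hypothetical path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "10.200.8.20", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Expected to print 192.168.1.0/24 once the allocation from the log has landed.
        fmt.Println("PodCIDR:", node.Spec.PodCIDR, "PodCIDRs:", node.Spec.PodCIDRs)
    }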
Oct 2 19:56:47.123763 kubelet[1918]: I1002 19:56:47.123431 1918 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:56:47.123847 kubelet[1918]: E1002 19:56:47.123820 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:47.788886 kubelet[1918]: E1002 19:56:47.788833 1918 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:47.801108 kubelet[1918]: I1002 19:56:47.801054 1918 apiserver.go:52] "Watching apiserver" Oct 2 19:56:47.803311 kubelet[1918]: E1002 19:56:47.803282 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:47.803660 kubelet[1918]: I1002 19:56:47.803632 1918 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:56:47.803792 kubelet[1918]: I1002 19:56:47.803730 1918 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:56:47.810508 systemd[1]: Created slice kubepods-burstable-pod331946e9_4a0f_47a0_a839_0388e94dee69.slice. Oct 2 19:56:47.824925 systemd[1]: Created slice kubepods-besteffort-pod7b8a3a56_b4f9_4e3d_ac5d_4da52117c1d7.slice. Oct 2 19:56:47.928300 kubelet[1918]: I1002 19:56:47.928233 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/331946e9-4a0f-47a0-a839-0388e94dee69-clustermesh-secrets\") pod \"cilium-jbmjb\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " pod="kube-system/cilium-jbmjb" Oct 2 19:56:47.928300 kubelet[1918]: I1002 19:56:47.928306 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-host-proc-sys-net\") pod \"cilium-jbmjb\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " pod="kube-system/cilium-jbmjb" Oct 2 19:56:47.928582 kubelet[1918]: I1002 19:56:47.928344 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/331946e9-4a0f-47a0-a839-0388e94dee69-hubble-tls\") pod \"cilium-jbmjb\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " pod="kube-system/cilium-jbmjb" Oct 2 19:56:47.928582 kubelet[1918]: I1002 19:56:47.928382 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98qck\" (UniqueName: \"kubernetes.io/projected/331946e9-4a0f-47a0-a839-0388e94dee69-kube-api-access-98qck\") pod \"cilium-jbmjb\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " pod="kube-system/cilium-jbmjb" Oct 2 19:56:47.928582 kubelet[1918]: I1002 19:56:47.928425 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-hostproc\") pod \"cilium-jbmjb\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " pod="kube-system/cilium-jbmjb" Oct 2 19:56:47.928582 kubelet[1918]: I1002 19:56:47.928458 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-lib-modules\") pod \"cilium-jbmjb\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " pod="kube-system/cilium-jbmjb" Oct 2 19:56:47.928582 kubelet[1918]: 
I1002 19:56:47.928487 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-xtables-lock\") pod \"cilium-jbmjb\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " pod="kube-system/cilium-jbmjb" Oct 2 19:56:47.928582 kubelet[1918]: I1002 19:56:47.928541 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-cilium-cgroup\") pod \"cilium-jbmjb\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " pod="kube-system/cilium-jbmjb" Oct 2 19:56:47.928930 kubelet[1918]: I1002 19:56:47.928589 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-cni-path\") pod \"cilium-jbmjb\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " pod="kube-system/cilium-jbmjb" Oct 2 19:56:47.928930 kubelet[1918]: I1002 19:56:47.928627 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/331946e9-4a0f-47a0-a839-0388e94dee69-cilium-config-path\") pod \"cilium-jbmjb\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " pod="kube-system/cilium-jbmjb" Oct 2 19:56:47.928930 kubelet[1918]: I1002 19:56:47.928677 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-etc-cni-netd\") pod \"cilium-jbmjb\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " pod="kube-system/cilium-jbmjb" Oct 2 19:56:47.928930 kubelet[1918]: I1002 19:56:47.928722 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzg6t\" (UniqueName: \"kubernetes.io/projected/7b8a3a56-b4f9-4e3d-ac5d-4da52117c1d7-kube-api-access-qzg6t\") pod \"kube-proxy-9qx7w\" (UID: \"7b8a3a56-b4f9-4e3d-ac5d-4da52117c1d7\") " pod="kube-system/kube-proxy-9qx7w" Oct 2 19:56:47.928930 kubelet[1918]: I1002 19:56:47.928758 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-cilium-run\") pod \"cilium-jbmjb\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " pod="kube-system/cilium-jbmjb" Oct 2 19:56:47.928930 kubelet[1918]: I1002 19:56:47.928792 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-bpf-maps\") pod \"cilium-jbmjb\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " pod="kube-system/cilium-jbmjb" Oct 2 19:56:47.929254 kubelet[1918]: I1002 19:56:47.928830 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-host-proc-sys-kernel\") pod \"cilium-jbmjb\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " pod="kube-system/cilium-jbmjb" Oct 2 19:56:47.929254 kubelet[1918]: I1002 19:56:47.928868 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7b8a3a56-b4f9-4e3d-ac5d-4da52117c1d7-kube-proxy\") pod \"kube-proxy-9qx7w\" (UID: 
\"7b8a3a56-b4f9-4e3d-ac5d-4da52117c1d7\") " pod="kube-system/kube-proxy-9qx7w" Oct 2 19:56:47.929254 kubelet[1918]: I1002 19:56:47.928908 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b8a3a56-b4f9-4e3d-ac5d-4da52117c1d7-xtables-lock\") pod \"kube-proxy-9qx7w\" (UID: \"7b8a3a56-b4f9-4e3d-ac5d-4da52117c1d7\") " pod="kube-system/kube-proxy-9qx7w" Oct 2 19:56:47.929254 kubelet[1918]: I1002 19:56:47.928957 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b8a3a56-b4f9-4e3d-ac5d-4da52117c1d7-lib-modules\") pod \"kube-proxy-9qx7w\" (UID: \"7b8a3a56-b4f9-4e3d-ac5d-4da52117c1d7\") " pod="kube-system/kube-proxy-9qx7w" Oct 2 19:56:47.929254 kubelet[1918]: I1002 19:56:47.928974 1918 reconciler.go:169] "Reconciler: start to sync state" Oct 2 19:56:47.965288 kubelet[1918]: E1002 19:56:47.965258 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:48.133666 env[1333]: time="2023-10-02T19:56:48.133612639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9qx7w,Uid:7b8a3a56-b4f9-4e3d-ac5d-4da52117c1d7,Namespace:kube-system,Attempt:0,}" Oct 2 19:56:48.423622 env[1333]: time="2023-10-02T19:56:48.423503837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jbmjb,Uid:331946e9-4a0f-47a0-a839-0388e94dee69,Namespace:kube-system,Attempt:0,}" Oct 2 19:56:48.803698 kubelet[1918]: E1002 19:56:48.803663 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:49.043388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2204360533.mount: Deactivated successfully. 
Oct 2 19:56:49.078396 env[1333]: time="2023-10-02T19:56:49.077986697Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:49.083057 env[1333]: time="2023-10-02T19:56:49.083016939Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:49.101037 env[1333]: time="2023-10-02T19:56:49.100995717Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:49.104543 env[1333]: time="2023-10-02T19:56:49.104493024Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:49.111129 env[1333]: time="2023-10-02T19:56:49.111092304Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:49.115257 env[1333]: time="2023-10-02T19:56:49.115226067Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:49.119411 env[1333]: time="2023-10-02T19:56:49.119379231Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:49.122775 env[1333]: time="2023-10-02T19:56:49.122745827Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:49.193847 env[1333]: time="2023-10-02T19:56:49.192821480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:56:49.193847 env[1333]: time="2023-10-02T19:56:49.192885085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:56:49.193847 env[1333]: time="2023-10-02T19:56:49.192901086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:56:49.193847 env[1333]: time="2023-10-02T19:56:49.193067901Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd74fb91846ef8a558d822006a04cb75097cc4da4249d0a0eef23f6db505ecc6 pid=2003 runtime=io.containerd.runc.v2 Oct 2 19:56:49.202287 env[1333]: time="2023-10-02T19:56:49.202228305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:56:49.202425 env[1333]: time="2023-10-02T19:56:49.202311413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:56:49.202425 env[1333]: time="2023-10-02T19:56:49.202340515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:56:49.202570 env[1333]: time="2023-10-02T19:56:49.202503530Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b pid=2021 runtime=io.containerd.runc.v2 Oct 2 19:56:49.222023 systemd[1]: Started cri-containerd-254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b.scope. Oct 2 19:56:49.228536 systemd[1]: Started cri-containerd-fd74fb91846ef8a558d822006a04cb75097cc4da4249d0a0eef23f6db505ecc6.scope. Oct 2 19:56:49.242000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.270857 kernel: audit: type=1400 audit(1696276609.242:583): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.270961 kernel: audit: type=1400 audit(1696276609.242:584): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.242000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.283877 kernel: audit: type=1400 audit(1696276609.242:585): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.242000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.296516 kernel: audit: type=1400 audit(1696276609.242:586): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.321755 kernel: audit: type=1400 audit(1696276609.242:587): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.321825 kernel: audit: type=1400 audit(1696276609.242:588): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.322580 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:56:49.331143 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:56:49.331217 kernel: audit: audit_lost=1 audit_rate_limit=0 
audit_backlog_limit=64 Oct 2 19:56:49.331238 kernel: audit: audit_lost=2 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:56:49.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.242000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.269000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.269000 audit: BPF prog-id=68 op=LOAD Oct 2 19:56:49.269000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.269000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.269000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.269000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.269000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.269000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.269000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.269000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.269000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.282000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.282000 audit[2038]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c00011fc48 a2=10 a3=1c items=0 ppid=2021 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:49.282000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235346430383833346333363330313532623333363533313237333835 Oct 2 19:56:49.282000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.282000 audit[2038]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c00011f6b0 a2=3c a3=c items=0 ppid=2021 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:49.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235346430383833346333363330313532623333363533313237333835 Oct 2 19:56:49.282000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.282000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.282000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.282000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.282000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.282000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.282000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.282000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.282000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.282000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.282000 audit: BPF prog-id=69 op=LOAD Oct 2 19:56:49.282000 audit[2038]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00011f9d8 a2=78 a3=c000280cd0 items=0 ppid=2021 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:49.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235346430383833346333363330313532623333363533313237333835 Oct 2 19:56:49.295000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.295000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.295000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.295000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.295000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.295000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.295000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.295000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.307000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.307000 audit: BPF prog-id=70 op=LOAD Oct 2 19:56:49.295000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.295000 audit: BPF prog-id=71 op=LOAD Oct 2 19:56:49.295000 audit[2038]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00011f770 a2=78 a3=c000280d18 items=0 ppid=2021 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:49.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235346430383833346333363330313532623333363533313237333835 Oct 2 19:56:49.320000 audit[2020]: AVC avc: denied { bpf } for pid=2020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.320000 audit[2020]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f 
a1=c000197c48 a2=10 a3=1c items=0 ppid=2003 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:49.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664373466623931383436656638613535386438323230303661303463 Oct 2 19:56:49.320000 audit[2020]: AVC avc: denied { perfmon } for pid=2020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.320000 audit[2020]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2003 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:49.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664373466623931383436656638613535386438323230303661303463 Oct 2 19:56:49.320000 audit[2020]: AVC avc: denied { bpf } for pid=2020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.320000 audit[2020]: AVC avc: denied { bpf } for pid=2020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.320000 audit[2020]: AVC avc: denied { bpf } for pid=2020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.320000 audit[2020]: AVC avc: denied { perfmon } for pid=2020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.320000 audit[2020]: AVC avc: denied { perfmon } for pid=2020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.320000 audit[2020]: AVC avc: denied { perfmon } for pid=2020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.320000 audit[2020]: AVC avc: denied { perfmon } for pid=2020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.320000 audit[2020]: AVC avc: denied { bpf } for pid=2020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.320000 audit: BPF prog-id=73 op=LOAD Oct 2 19:56:49.320000 audit[2020]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0001dd3b0 items=0 ppid=2003 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:49.320000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664373466623931383436656638613535386438323230303661303463 Oct 2 19:56:49.340000 audit[2020]: AVC avc: denied { bpf } for pid=2020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.340000 audit[2020]: AVC avc: denied { bpf } for pid=2020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.340000 audit[2020]: AVC avc: denied { perfmon } for pid=2020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.340000 audit[2020]: AVC avc: denied { perfmon } for pid=2020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.340000 audit[2020]: AVC avc: denied { perfmon } for pid=2020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.340000 audit[2020]: AVC avc: denied { perfmon } for pid=2020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.340000 audit[2020]: AVC avc: denied { perfmon } for pid=2020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.340000 audit[2020]: AVC avc: denied { bpf } for pid=2020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.340000 audit[2020]: AVC avc: denied { bpf } for pid=2020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.340000 audit: BPF prog-id=74 op=LOAD Oct 2 19:56:49.340000 audit[2020]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c0001dd3f8 items=0 ppid=2003 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:49.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664373466623931383436656638613535386438323230303661303463 Oct 2 19:56:49.340000 audit: BPF prog-id=74 op=UNLOAD Oct 2 19:56:49.340000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:56:49.340000 audit[2020]: AVC avc: denied { bpf } for pid=2020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.340000 audit[2020]: AVC avc: denied { bpf } for pid=2020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.340000 audit[2020]: AVC avc: denied { bpf } for pid=2020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.340000 audit[2020]: AVC avc: denied { perfmon } for pid=2020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.340000 audit[2020]: AVC avc: denied { perfmon } for pid=2020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.340000 audit[2020]: AVC avc: denied { perfmon } for pid=2020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.340000 audit[2020]: AVC avc: denied { perfmon } for pid=2020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.340000 audit[2020]: AVC avc: denied { perfmon } for pid=2020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.340000 audit[2020]: AVC avc: denied { bpf } for pid=2020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.340000 audit[2020]: AVC avc: denied { bpf } for pid=2020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:49.340000 audit: BPF prog-id=75 op=LOAD Oct 2 19:56:49.340000 audit[2020]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c0001dd808 items=0 ppid=2003 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:49.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664373466623931383436656638613535386438323230303661303463 Oct 2 19:56:49.359155 env[1333]: time="2023-10-02T19:56:49.359112580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jbmjb,Uid:331946e9-4a0f-47a0-a839-0388e94dee69,Namespace:kube-system,Attempt:0,} returns sandbox id \"254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b\"" Oct 2 19:56:49.361957 env[1333]: time="2023-10-02T19:56:49.361930427Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\"" Oct 2 19:56:49.363576 env[1333]: time="2023-10-02T19:56:49.363520067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9qx7w,Uid:7b8a3a56-b4f9-4e3d-ac5d-4da52117c1d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd74fb91846ef8a558d822006a04cb75097cc4da4249d0a0eef23f6db505ecc6\"" Oct 2 19:56:49.804409 kubelet[1918]: E1002 19:56:49.804313 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:50.804765 kubelet[1918]: E1002 19:56:50.804729 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:51.805558 kubelet[1918]: E1002 19:56:51.805508 1918 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:52.806246 kubelet[1918]: E1002 19:56:52.806189 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:52.967046 kubelet[1918]: E1002 19:56:52.967006 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:53.806387 kubelet[1918]: E1002 19:56:53.806325 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:54.806738 kubelet[1918]: E1002 19:56:54.806696 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:55.807276 kubelet[1918]: E1002 19:56:55.807201 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:55.831768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount71577255.mount: Deactivated successfully. Oct 2 19:56:56.807708 kubelet[1918]: E1002 19:56:56.807627 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:57.808257 kubelet[1918]: E1002 19:56:57.808220 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:57.967466 kubelet[1918]: E1002 19:56:57.967433 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:58.564687 env[1333]: time="2023-10-02T19:56:58.564638574Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:58.574038 env[1333]: time="2023-10-02T19:56:58.573993685Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:526bd4754c9cd45a9602873f814648239ebf8405ea2b401f5e7a3546f7310d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:58.578673 env[1333]: time="2023-10-02T19:56:58.578637938Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:58.580606 env[1333]: time="2023-10-02T19:56:58.580470277Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\" returns image reference \"sha256:526bd4754c9cd45a9602873f814648239ebf8405ea2b401f5e7a3546f7310d88\"" Oct 2 19:56:58.583978 env[1333]: time="2023-10-02T19:56:58.583938940Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\"" Oct 2 19:56:58.585344 env[1333]: time="2023-10-02T19:56:58.585309844Z" level=info msg="CreateContainer within sandbox \"254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:56:58.615888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3956940029.mount: Deactivated successfully. 
Oct 2 19:56:58.621917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2272855028.mount: Deactivated successfully. Oct 2 19:56:58.644613 env[1333]: time="2023-10-02T19:56:58.644565744Z" level=info msg="CreateContainer within sandbox \"254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2\"" Oct 2 19:56:58.645345 env[1333]: time="2023-10-02T19:56:58.645314801Z" level=info msg="StartContainer for \"171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2\"" Oct 2 19:56:58.664545 systemd[1]: Started cri-containerd-171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2.scope. Oct 2 19:56:58.675078 systemd[1]: cri-containerd-171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2.scope: Deactivated successfully. Oct 2 19:56:58.809312 kubelet[1918]: E1002 19:56:58.809263 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:59.612435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2-rootfs.mount: Deactivated successfully. Oct 2 19:56:59.809727 kubelet[1918]: E1002 19:56:59.809674 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:00.669764 env[1333]: time="2023-10-02T19:57:00.669595230Z" level=error msg="get state for 171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2" error="context deadline exceeded: unknown" Oct 2 19:57:00.669764 env[1333]: time="2023-10-02T19:57:00.669739941Z" level=warning msg="unknown status" status=0 Oct 2 19:57:00.810631 kubelet[1918]: E1002 19:57:00.810577 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:01.811123 kubelet[1918]: E1002 19:57:01.811083 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:02.341424 env[1333]: time="2023-10-02T19:57:02.341363842Z" level=info msg="shim disconnected" id=171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2 Oct 2 19:57:02.341424 env[1333]: time="2023-10-02T19:57:02.341419446Z" level=warning msg="cleaning up after shim disconnected" id=171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2 namespace=k8s.io Oct 2 19:57:02.342001 env[1333]: time="2023-10-02T19:57:02.341432547Z" level=info msg="cleaning up dead shim" Oct 2 19:57:02.350007 env[1333]: time="2023-10-02T19:57:02.349969459Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:57:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2106 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:57:02Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:57:02.350283 env[1333]: time="2023-10-02T19:57:02.350195475Z" level=error msg="copy shim log" error="read /proc/self/fd/44: file already closed" Oct 2 19:57:02.350486 env[1333]: time="2023-10-02T19:57:02.350455794Z" level=error msg="Failed to pipe stderr of container \"171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2\"" error="reading from a closed fifo" Oct 2 
19:57:02.356614 env[1333]: time="2023-10-02T19:57:02.356569232Z" level=error msg="Failed to pipe stdout of container \"171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2\"" error="reading from a closed fifo" Oct 2 19:57:02.361756 env[1333]: time="2023-10-02T19:57:02.361714201Z" level=error msg="StartContainer for \"171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:57:02.361982 kubelet[1918]: E1002 19:57:02.361959 1918 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2" Oct 2 19:57:02.362333 kubelet[1918]: E1002 19:57:02.362307 1918 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:57:02.362333 kubelet[1918]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:57:02.362333 kubelet[1918]: rm /hostbin/cilium-mount Oct 2 19:57:02.362333 kubelet[1918]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-98qck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:57:02.362585 kubelet[1918]: E1002 19:57:02.362361 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime 
create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jbmjb" podUID=331946e9-4a0f-47a0-a839-0388e94dee69 Oct 2 19:57:02.811637 kubelet[1918]: E1002 19:57:02.811599 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:02.968358 kubelet[1918]: E1002 19:57:02.968323 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:03.164836 env[1333]: time="2023-10-02T19:57:03.164459966Z" level=info msg="CreateContainer within sandbox \"254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:57:03.221224 env[1333]: time="2023-10-02T19:57:03.221173875Z" level=info msg="CreateContainer within sandbox \"254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b\"" Oct 2 19:57:03.222357 env[1333]: time="2023-10-02T19:57:03.222318855Z" level=info msg="StartContainer for \"0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b\"" Oct 2 19:57:03.254562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2006618653.mount: Deactivated successfully. Oct 2 19:57:03.273972 systemd[1]: Started cri-containerd-0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b.scope. Oct 2 19:57:03.282254 systemd[1]: cri-containerd-0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b.scope: Deactivated successfully. Oct 2 19:57:03.282665 systemd[1]: Stopped cri-containerd-0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b.scope. 
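
The first mount-cgroup attempt above failed inside runc with "write /proc/self/attr/keycreate: invalid argument" while applying the pod's SELinuxOptions (Type:spc_t), and the retried attempt (container 0caef…, just started and stopped above) is about to fail the same way below. The sketch that follows is only an assumption about the operation behind that message, labeling the process's session keyring through the procfs attribute file, and is not runc's actual code; on this host the same write is expected to be rejected, just as the log shows.

```go
// keycreate_probe.go — assumption/illustration only, not runc's implementation:
// attempt the keyring labeling whose failure is reported in the log above.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Context assumed from the pod spec's SELinuxOptions (Type: spc_t, Level: s0).
	label := "system_u:system_r:spc_t:s0"
	f, err := os.OpenFile("/proc/self/attr/keycreate", os.O_WRONLY, 0)
	if err != nil {
		fmt.Println("cannot open keycreate attribute:", err)
		return
	}
	defer f.Close()
	if _, err := f.Write([]byte(label)); err != nil {
		// On this host the write fails, matching
		// "write /proc/self/attr/keycreate: invalid argument" from runc.
		fmt.Println("keyring labeling failed:", err)
		return
	}
	fmt.Println("keyring label set to", label)
}
```
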
Oct 2 19:57:03.488934 env[1333]: time="2023-10-02T19:57:03.488792489Z" level=info msg="shim disconnected" id=0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b Oct 2 19:57:03.488934 env[1333]: time="2023-10-02T19:57:03.488847493Z" level=warning msg="cleaning up after shim disconnected" id=0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b namespace=k8s.io Oct 2 19:57:03.488934 env[1333]: time="2023-10-02T19:57:03.488859994Z" level=info msg="cleaning up dead shim" Oct 2 19:57:03.498194 env[1333]: time="2023-10-02T19:57:03.498148750Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:57:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2144 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:57:03Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:57:03.498426 env[1333]: time="2023-10-02T19:57:03.498376166Z" level=error msg="copy shim log" error="read /proc/self/fd/50: file already closed" Oct 2 19:57:03.498625 env[1333]: time="2023-10-02T19:57:03.498586981Z" level=error msg="Failed to pipe stdout of container \"0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b\"" error="reading from a closed fifo" Oct 2 19:57:03.503684 env[1333]: time="2023-10-02T19:57:03.503630138Z" level=error msg="Failed to pipe stderr of container \"0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b\"" error="reading from a closed fifo" Oct 2 19:57:03.512277 env[1333]: time="2023-10-02T19:57:03.512232946Z" level=error msg="StartContainer for \"0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:57:03.512843 kubelet[1918]: E1002 19:57:03.512603 1918 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b" Oct 2 19:57:03.512843 kubelet[1918]: E1002 19:57:03.512758 1918 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:57:03.512843 kubelet[1918]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:57:03.512843 kubelet[1918]: rm /hostbin/cilium-mount Oct 2 19:57:03.513086 kubelet[1918]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-98qck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:57:03.513218 kubelet[1918]: E1002 19:57:03.512811 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jbmjb" podUID=331946e9-4a0f-47a0-a839-0388e94dee69 Oct 2 19:57:03.812809 kubelet[1918]: E1002 19:57:03.812737 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:03.908879 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b-rootfs.mount: Deactivated successfully. 
Oct 2 19:57:03.918126 env[1333]: time="2023-10-02T19:57:03.918080029Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:57:03.926512 env[1333]: time="2023-10-02T19:57:03.926418019Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:57:03.929730 env[1333]: time="2023-10-02T19:57:03.929699951Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:57:03.935879 env[1333]: time="2023-10-02T19:57:03.935844985Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:57:03.936289 env[1333]: time="2023-10-02T19:57:03.936257814Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\" returns image reference \"sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2\"" Oct 2 19:57:03.938126 env[1333]: time="2023-10-02T19:57:03.938069142Z" level=info msg="CreateContainer within sandbox \"fd74fb91846ef8a558d822006a04cb75097cc4da4249d0a0eef23f6db505ecc6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:57:03.991742 env[1333]: time="2023-10-02T19:57:03.991698932Z" level=info msg="CreateContainer within sandbox \"fd74fb91846ef8a558d822006a04cb75097cc4da4249d0a0eef23f6db505ecc6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2047c72353b22c739cf444fe036bfec0491cafc3af1e2e58f9194b50546504f6\"" Oct 2 19:57:03.992281 env[1333]: time="2023-10-02T19:57:03.992163265Z" level=info msg="StartContainer for \"2047c72353b22c739cf444fe036bfec0491cafc3af1e2e58f9194b50546504f6\"" Oct 2 19:57:04.008518 systemd[1]: Started cri-containerd-2047c72353b22c739cf444fe036bfec0491cafc3af1e2e58f9194b50546504f6.scope. 
Oct 2 19:57:04.023000 audit[2164]: AVC avc: denied { perfmon } for pid=2164 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.028292 kernel: kauditd_printk_skb: 132 callbacks suppressed Oct 2 19:57:04.028381 kernel: audit: type=1400 audit(1696276624.023:616): avc: denied { perfmon } for pid=2164 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.023000 audit[2164]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001496b0 a2=3c a3=8 items=0 ppid=2003 pid=2164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.061779 kernel: audit: type=1300 audit(1696276624.023:616): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001496b0 a2=3c a3=8 items=0 ppid=2003 pid=2164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.061874 kernel: audit: type=1327 audit(1696276624.023:616): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230343763373233353362323263373339636634343466653033366266 Oct 2 19:57:04.023000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230343763373233353362323263373339636634343466653033366266 Oct 2 19:57:04.023000 audit[2164]: AVC avc: denied { bpf } for pid=2164 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.092473 kernel: audit: type=1400 audit(1696276624.023:617): avc: denied { bpf } for pid=2164 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.023000 audit[2164]: AVC avc: denied { bpf } for pid=2164 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.023000 audit[2164]: AVC avc: denied { bpf } for pid=2164 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.118863 kernel: audit: type=1400 audit(1696276624.023:617): avc: denied { bpf } for pid=2164 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.118941 kernel: audit: type=1400 audit(1696276624.023:617): avc: denied { bpf } for pid=2164 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.126544 kernel: audit: type=1400 audit(1696276624.023:617): avc: denied { perfmon } for pid=2164 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.023000 audit[2164]: AVC avc: denied { 
perfmon } for pid=2164 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.023000 audit[2164]: AVC avc: denied { perfmon } for pid=2164 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.023000 audit[2164]: AVC avc: denied { perfmon } for pid=2164 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.163063 env[1333]: time="2023-10-02T19:57:04.163027886Z" level=info msg="StartContainer for \"2047c72353b22c739cf444fe036bfec0491cafc3af1e2e58f9194b50546504f6\" returns successfully" Oct 2 19:57:04.165163 kernel: audit: type=1400 audit(1696276624.023:617): avc: denied { perfmon } for pid=2164 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.165249 kernel: audit: type=1400 audit(1696276624.023:617): avc: denied { perfmon } for pid=2164 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.165281 kernel: audit: type=1400 audit(1696276624.023:617): avc: denied { perfmon } for pid=2164 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.023000 audit[2164]: AVC avc: denied { perfmon } for pid=2164 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.168125 kubelet[1918]: I1002 19:57:04.167598 1918 scope.go:115] "RemoveContainer" containerID="171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2" Oct 2 19:57:04.168125 kubelet[1918]: I1002 19:57:04.168109 1918 scope.go:115] "RemoveContainer" containerID="171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2" Oct 2 19:57:04.169433 env[1333]: time="2023-10-02T19:57:04.169408331Z" level=info msg="RemoveContainer for \"171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2\"" Oct 2 19:57:04.170167 env[1333]: time="2023-10-02T19:57:04.170147382Z" level=info msg="RemoveContainer for \"171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2\"" Oct 2 19:57:04.170327 env[1333]: time="2023-10-02T19:57:04.170297993Z" level=error msg="RemoveContainer for \"171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2\" failed" error="failed to set removing state for container \"171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2\": container is already in removing state" Oct 2 19:57:04.170809 kubelet[1918]: E1002 19:57:04.170500 1918 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2\": container is already in removing state" containerID="171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2" Oct 2 19:57:04.170809 kubelet[1918]: E1002 19:57:04.170550 1918 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2": container is already in removing state; Skipping pod 
"cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69)" Oct 2 19:57:04.170809 kubelet[1918]: E1002 19:57:04.170770 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69)\"" pod="kube-system/cilium-jbmjb" podUID=331946e9-4a0f-47a0-a839-0388e94dee69 Oct 2 19:57:04.023000 audit[2164]: AVC avc: denied { perfmon } for pid=2164 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.023000 audit[2164]: AVC avc: denied { bpf } for pid=2164 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.023000 audit[2164]: AVC avc: denied { bpf } for pid=2164 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.023000 audit: BPF prog-id=76 op=LOAD Oct 2 19:57:04.023000 audit[2164]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001499d8 a2=78 a3=c0002187a0 items=0 ppid=2003 pid=2164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.023000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230343763373233353362323263373339636634343466653033366266 Oct 2 19:57:04.061000 audit[2164]: AVC avc: denied { bpf } for pid=2164 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.061000 audit[2164]: AVC avc: denied { bpf } for pid=2164 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.061000 audit[2164]: AVC avc: denied { perfmon } for pid=2164 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.061000 audit[2164]: AVC avc: denied { perfmon } for pid=2164 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.061000 audit[2164]: AVC avc: denied { perfmon } for pid=2164 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.061000 audit[2164]: AVC avc: denied { perfmon } for pid=2164 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.061000 audit[2164]: AVC avc: denied { perfmon } for pid=2164 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.061000 audit[2164]: AVC avc: denied { bpf } for pid=2164 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.061000 audit[2164]: AVC avc: denied { bpf } for pid=2164 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.061000 audit: BPF prog-id=77 op=LOAD Oct 2 19:57:04.061000 audit[2164]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000149770 a2=78 a3=c0002187e8 items=0 ppid=2003 pid=2164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230343763373233353362323263373339636634343466653033366266 Oct 2 19:57:04.079000 audit: BPF prog-id=77 op=UNLOAD Oct 2 19:57:04.079000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:57:04.079000 audit[2164]: AVC avc: denied { bpf } for pid=2164 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.079000 audit[2164]: AVC avc: denied { bpf } for pid=2164 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.079000 audit[2164]: AVC avc: denied { bpf } for pid=2164 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.079000 audit[2164]: AVC avc: denied { perfmon } for pid=2164 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.079000 audit[2164]: AVC avc: denied { perfmon } for pid=2164 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.079000 audit[2164]: AVC avc: denied { perfmon } for pid=2164 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.079000 audit[2164]: AVC avc: denied { perfmon } for pid=2164 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.079000 audit[2164]: AVC avc: denied { perfmon } for pid=2164 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.079000 audit[2164]: AVC avc: denied { bpf } for pid=2164 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.079000 audit[2164]: AVC avc: denied { bpf } for pid=2164 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:57:04.079000 audit: BPF prog-id=78 op=LOAD Oct 2 19:57:04.079000 audit[2164]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000149c30 a2=78 a3=c000218878 items=0 ppid=2003 pid=2164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.079000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230343763373233353362323263373339636634343466653033366266 Oct 2 19:57:04.180899 env[1333]: time="2023-10-02T19:57:04.180865230Z" level=info msg="RemoveContainer for \"171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2\" returns successfully" Oct 2 19:57:04.209498 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) Oct 2 19:57:04.209593 kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes) Oct 2 19:57:04.209621 kernel: IPVS: ipvs loaded. Oct 2 19:57:04.233625 kernel: IPVS: [rr] scheduler registered. Oct 2 19:57:04.242554 kernel: IPVS: [wrr] scheduler registered. Oct 2 19:57:04.252741 kernel: IPVS: [sh] scheduler registered. Oct 2 19:57:04.316000 audit[2222]: NETFILTER_CFG table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2222 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:57:04.316000 audit[2222]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffea58eab30 a2=0 a3=7ffea58eab1c items=0 ppid=2174 pid=2222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.316000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:57:04.317000 audit[2223]: NETFILTER_CFG table=mangle:40 family=10 entries=1 op=nft_register_chain pid=2223 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:57:04.317000 audit[2223]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdfaa32fe0 a2=0 a3=7ffdfaa32fcc items=0 ppid=2174 pid=2223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.317000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:57:04.318000 audit[2224]: NETFILTER_CFG table=nat:41 family=2 entries=1 op=nft_register_chain pid=2224 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:57:04.318000 audit[2224]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe5df13d90 a2=0 a3=7ffe5df13d7c items=0 ppid=2174 pid=2224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.318000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:57:04.319000 audit[2225]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2225 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:57:04.319000 audit[2225]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe2a990ae0 a2=0 a3=7ffe2a990acc items=0 ppid=2174 pid=2225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.319000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:57:04.320000 audit[2226]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2226 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:57:04.320000 audit[2226]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc91f87420 a2=0 a3=7ffc91f8740c items=0 ppid=2174 pid=2226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.320000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:57:04.321000 audit[2227]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2227 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:57:04.321000 audit[2227]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe7a3f7a80 a2=0 a3=7ffe7a3f7a6c items=0 ppid=2174 pid=2227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.321000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:57:04.421000 audit[2228]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_chain pid=2228 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:57:04.421000 audit[2228]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc03481e30 a2=0 a3=7ffc03481e1c items=0 ppid=2174 pid=2228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.421000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:57:04.426000 audit[2230]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2230 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:57:04.426000 audit[2230]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcf4e346e0 a2=0 a3=7ffcf4e346cc items=0 ppid=2174 pid=2230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.426000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:57:04.430000 audit[2233]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=2233 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:57:04.430000 audit[2233]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe8fefaed0 a2=0 a3=7ffe8fefaebc items=0 ppid=2174 pid=2233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.430000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:57:04.431000 audit[2234]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_chain pid=2234 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:57:04.431000 audit[2234]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd30ea3ba0 a2=0 a3=7ffd30ea3b8c items=0 ppid=2174 pid=2234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.431000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:57:04.435000 audit[2236]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=2236 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:57:04.435000 audit[2236]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcb24d2e50 a2=0 a3=7ffcb24d2e3c items=0 ppid=2174 pid=2236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.435000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:57:04.436000 audit[2237]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2237 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:57:04.436000 audit[2237]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe08bba500 a2=0 a3=7ffe08bba4ec items=0 ppid=2174 pid=2237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.436000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:57:04.439000 audit[2239]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2239 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:57:04.439000 audit[2239]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd8e374860 a2=0 a3=7ffd8e37484c items=0 ppid=2174 pid=2239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.439000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:57:04.442000 audit[2242]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=2242 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:57:04.442000 audit[2242]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff013775c0 a2=0 a3=7fff013775ac items=0 ppid=2174 
pid=2242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.442000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:57:04.443000 audit[2243]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2243 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:57:04.443000 audit[2243]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4944de30 a2=0 a3=7ffc4944de1c items=0 ppid=2174 pid=2243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.443000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:57:04.445000 audit[2245]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2245 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:57:04.445000 audit[2245]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc704e3150 a2=0 a3=7ffc704e313c items=0 ppid=2174 pid=2245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.445000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:57:04.446000 audit[2246]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=2246 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:57:04.446000 audit[2246]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdd829ef50 a2=0 a3=7ffdd829ef3c items=0 ppid=2174 pid=2246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.446000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:57:04.449000 audit[2248]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2248 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:57:04.449000 audit[2248]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdec883380 a2=0 a3=7ffdec88336c items=0 ppid=2174 pid=2248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.449000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:57:04.452000 audit[2251]: NETFILTER_CFG table=filter:57 family=2 entries=1 
op=nft_register_rule pid=2251 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:57:04.452000 audit[2251]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff233005d0 a2=0 a3=7fff233005bc items=0 ppid=2174 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.452000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:57:04.456000 audit[2254]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_rule pid=2254 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:57:04.456000 audit[2254]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffce4a20b50 a2=0 a3=7ffce4a20b3c items=0 ppid=2174 pid=2254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.456000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:57:04.457000 audit[2255]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_chain pid=2255 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:57:04.457000 audit[2255]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd81f7f760 a2=0 a3=7ffd81f7f74c items=0 ppid=2174 pid=2255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.457000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:57:04.459000 audit[2257]: NETFILTER_CFG table=nat:60 family=2 entries=2 op=nft_register_chain pid=2257 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:57:04.459000 audit[2257]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffef21f45a0 a2=0 a3=7ffef21f458c items=0 ppid=2174 pid=2257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.459000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:57:04.462000 audit[2260]: NETFILTER_CFG table=nat:61 family=2 entries=2 op=nft_register_chain pid=2260 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:57:04.462000 audit[2260]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc1b9f6570 a2=0 a3=7ffc1b9f655c items=0 ppid=2174 pid=2260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.462000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:57:04.494000 audit[2264]: NETFILTER_CFG table=filter:62 family=2 entries=6 op=nft_register_rule pid=2264 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:57:04.494000 audit[2264]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffca59626e0 a2=0 a3=7ffca59626cc items=0 ppid=2174 pid=2264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.494000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:57:04.517000 audit[2264]: NETFILTER_CFG table=nat:63 family=2 entries=17 op=nft_register_chain pid=2264 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:57:04.517000 audit[2264]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffca59626e0 a2=0 a3=7ffca59626cc items=0 ppid=2174 pid=2264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.517000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:57:04.520000 audit[2268]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_chain pid=2268 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:57:04.520000 audit[2268]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffde454a000 a2=0 a3=7ffde4549fec items=0 ppid=2174 pid=2268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.520000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:57:04.523000 audit[2270]: NETFILTER_CFG table=filter:65 family=10 entries=2 op=nft_register_chain pid=2270 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:57:04.523000 audit[2270]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe5ee84210 a2=0 a3=7ffe5ee841fc items=0 ppid=2174 pid=2270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.523000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:57:04.526000 audit[2273]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2273 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:57:04.526000 audit[2273]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffff4ba4730 a2=0 a3=7ffff4ba471c items=0 ppid=2174 pid=2273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.526000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:57:04.527000 audit[2274]: NETFILTER_CFG table=filter:67 family=10 entries=1 op=nft_register_chain pid=2274 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:57:04.527000 audit[2274]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4816eb10 a2=0 a3=7ffd4816eafc items=0 ppid=2174 pid=2274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.527000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:57:04.530000 audit[2276]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_rule pid=2276 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:57:04.530000 audit[2276]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff9e28ada0 a2=0 a3=7fff9e28ad8c items=0 ppid=2174 pid=2276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.530000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:57:04.531000 audit[2277]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_chain pid=2277 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:57:04.531000 audit[2277]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec1295720 a2=0 a3=7ffec129570c items=0 ppid=2174 pid=2277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.531000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:57:04.533000 audit[2279]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_rule pid=2279 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:57:04.533000 audit[2279]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffee95a390 a2=0 a3=7fffee95a37c items=0 ppid=2174 pid=2279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.533000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:57:04.537000 audit[2282]: NETFILTER_CFG table=filter:71 family=10 entries=2 op=nft_register_chain pid=2282 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" 
Oct 2 19:57:04.537000 audit[2282]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffff831bd10 a2=0 a3=7ffff831bcfc items=0 ppid=2174 pid=2282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.537000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:57:04.538000 audit[2283]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_chain pid=2283 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:57:04.538000 audit[2283]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1298d890 a2=0 a3=7fff1298d87c items=0 ppid=2174 pid=2283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.538000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:57:04.540000 audit[2285]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=2285 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:57:04.540000 audit[2285]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe5b2dfa50 a2=0 a3=7ffe5b2dfa3c items=0 ppid=2174 pid=2285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.540000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:57:04.541000 audit[2286]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_chain pid=2286 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:57:04.541000 audit[2286]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3991bbf0 a2=0 a3=7ffc3991bbdc items=0 ppid=2174 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.541000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:57:04.544000 audit[2288]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_rule pid=2288 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:57:04.544000 audit[2288]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc0070eeb0 a2=0 a3=7ffc0070ee9c items=0 ppid=2174 pid=2288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.544000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:57:04.547000 audit[2292]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2292 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:57:04.547000 audit[2292]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe5bf319e0 a2=0 a3=7ffe5bf319cc items=0 ppid=2174 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.547000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:57:04.551000 audit[2295]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2295 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:57:04.551000 audit[2295]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcc96d7a40 a2=0 a3=7ffcc96d7a2c items=0 ppid=2174 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.551000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:57:04.552000 audit[2296]: NETFILTER_CFG table=nat:78 family=10 entries=1 op=nft_register_chain pid=2296 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:57:04.552000 audit[2296]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffebe3e71d0 a2=0 a3=7ffebe3e71bc items=0 ppid=2174 pid=2296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.552000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:57:04.554000 audit[2298]: NETFILTER_CFG table=nat:79 family=10 entries=2 op=nft_register_chain pid=2298 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:57:04.554000 audit[2298]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffc9e050550 a2=0 a3=7ffc9e05053c items=0 ppid=2174 pid=2298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.554000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:57:04.557000 audit[2301]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2301 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:57:04.557000 
audit[2301]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff9c8618f0 a2=0 a3=7fff9c8618dc items=0 ppid=2174 pid=2301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.557000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:57:04.562000 audit[2305]: NETFILTER_CFG table=filter:81 family=10 entries=3 op=nft_register_rule pid=2305 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:57:04.562000 audit[2305]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd68474b90 a2=0 a3=7ffd68474b7c items=0 ppid=2174 pid=2305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.562000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:57:04.562000 audit[2305]: NETFILTER_CFG table=nat:82 family=10 entries=10 op=nft_register_chain pid=2305 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:57:04.562000 audit[2305]: SYSCALL arch=c000003e syscall=46 success=yes exit=1860 a0=3 a1=7ffd68474b90 a2=0 a3=7ffd68474b7c items=0 ppid=2174 pid=2305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:57:04.562000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:57:04.813520 kubelet[1918]: E1002 19:57:04.813462 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:05.171930 kubelet[1918]: E1002 19:57:05.171803 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69)\"" pod="kube-system/cilium-jbmjb" podUID=331946e9-4a0f-47a0-a839-0388e94dee69 Oct 2 19:57:05.345674 kubelet[1918]: W1002 19:57:05.345618 1918 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod331946e9_4a0f_47a0_a839_0388e94dee69.slice/cri-containerd-171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2.scope WatchSource:0}: container "171f18b9988be34e83e18b207a3fe555a2404beedefcf091110f1a8cbf0a28f2" in namespace "k8s.io": not found Oct 2 19:57:05.814350 kubelet[1918]: E1002 19:57:05.814293 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:06.814832 kubelet[1918]: E1002 19:57:06.814774 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:07.789115 kubelet[1918]: E1002 19:57:07.789076 1918 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:07.815314 
kubelet[1918]: E1002 19:57:07.815260 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:07.969746 kubelet[1918]: E1002 19:57:07.969707 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:08.453272 kubelet[1918]: W1002 19:57:08.453224 1918 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod331946e9_4a0f_47a0_a839_0388e94dee69.slice/cri-containerd-0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b.scope WatchSource:0}: task 0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b not found: not found Oct 2 19:57:08.815505 kubelet[1918]: E1002 19:57:08.815448 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:09.816214 kubelet[1918]: E1002 19:57:09.816153 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:10.816790 kubelet[1918]: E1002 19:57:10.816726 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:11.817634 kubelet[1918]: E1002 19:57:11.817581 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:12.817877 kubelet[1918]: E1002 19:57:12.817819 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:12.971216 kubelet[1918]: E1002 19:57:12.971178 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:13.818061 kubelet[1918]: E1002 19:57:13.818005 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:14.818297 kubelet[1918]: E1002 19:57:14.818240 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:15.822958 kubelet[1918]: E1002 19:57:15.822912 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:16.823909 kubelet[1918]: E1002 19:57:16.823844 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:17.824804 kubelet[1918]: E1002 19:57:17.824741 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:17.972122 kubelet[1918]: E1002 19:57:17.972085 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:18.092051 env[1333]: time="2023-10-02T19:57:18.091927397Z" level=info msg="CreateContainer within sandbox \"254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:57:18.135463 env[1333]: time="2023-10-02T19:57:18.135414867Z" level=info msg="CreateContainer within sandbox 
\"254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb\"" Oct 2 19:57:18.136169 env[1333]: time="2023-10-02T19:57:18.136132809Z" level=info msg="StartContainer for \"29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb\"" Oct 2 19:57:18.161999 systemd[1]: Started cri-containerd-29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb.scope. Oct 2 19:57:18.173300 systemd[1]: cri-containerd-29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb.scope: Deactivated successfully. Oct 2 19:57:18.511176 env[1333]: time="2023-10-02T19:57:18.511024259Z" level=info msg="shim disconnected" id=29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb Oct 2 19:57:18.511742 env[1333]: time="2023-10-02T19:57:18.511707300Z" level=warning msg="cleaning up after shim disconnected" id=29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb namespace=k8s.io Oct 2 19:57:18.511927 env[1333]: time="2023-10-02T19:57:18.511900811Z" level=info msg="cleaning up dead shim" Oct 2 19:57:18.520250 env[1333]: time="2023-10-02T19:57:18.520215402Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:57:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2329 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:57:18Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:57:18.520505 env[1333]: time="2023-10-02T19:57:18.520456617Z" level=error msg="copy shim log" error="read /proc/self/fd/55: file already closed" Oct 2 19:57:18.520726 env[1333]: time="2023-10-02T19:57:18.520695131Z" level=error msg="Failed to pipe stdout of container \"29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb\"" error="reading from a closed fifo" Oct 2 19:57:18.520821 env[1333]: time="2023-10-02T19:57:18.520787036Z" level=error msg="Failed to pipe stderr of container \"29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb\"" error="reading from a closed fifo" Oct 2 19:57:18.525710 env[1333]: time="2023-10-02T19:57:18.525673725Z" level=error msg="StartContainer for \"29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:57:18.525928 kubelet[1918]: E1002 19:57:18.525905 1918 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb" Oct 2 19:57:18.526107 kubelet[1918]: E1002 19:57:18.526089 1918 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:57:18.526107 kubelet[1918]: nsenter 
--cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:57:18.526107 kubelet[1918]: rm /hostbin/cilium-mount Oct 2 19:57:18.526107 kubelet[1918]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-98qck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:57:18.526336 kubelet[1918]: E1002 19:57:18.526142 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jbmjb" podUID=331946e9-4a0f-47a0-a839-0388e94dee69 Oct 2 19:57:18.825615 kubelet[1918]: E1002 19:57:18.825580 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:19.121682 systemd[1]: run-containerd-runc-k8s.io-29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb-runc.zOIF45.mount: Deactivated successfully. Oct 2 19:57:19.121827 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb-rootfs.mount: Deactivated successfully. 
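At this point the mount-cgroup init container has failed again on attempt 2 with the same keycreate EINVAL, and the kubelet unmounts the runc and rootfs mounts for the dead task. A possible way to follow this crash loop from the API side is sketched below; it assumes a working kubeconfig for this cluster, and the pod name and namespace are taken from the log.

    # Sketch: observe the crash-looping init container via the API server
    # (assumes kubectl access to this cluster).
    kubectl -n kube-system describe pod cilium-jbmjb        # events show RunContainerError / CrashLoopBackOff
    kubectl -n kube-system get pod cilium-jbmjb \
      -o jsonpath='{.status.initContainerStatuses[0].state}'; echo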
Oct 2 19:57:19.196207 kubelet[1918]: I1002 19:57:19.196092 1918 scope.go:115] "RemoveContainer" containerID="0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b" Oct 2 19:57:19.196454 kubelet[1918]: I1002 19:57:19.196432 1918 scope.go:115] "RemoveContainer" containerID="0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b" Oct 2 19:57:19.197969 env[1333]: time="2023-10-02T19:57:19.197905934Z" level=info msg="RemoveContainer for \"0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b\"" Oct 2 19:57:19.198403 env[1333]: time="2023-10-02T19:57:19.198367661Z" level=info msg="RemoveContainer for \"0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b\"" Oct 2 19:57:19.198517 env[1333]: time="2023-10-02T19:57:19.198469567Z" level=error msg="RemoveContainer for \"0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b\" failed" error="failed to set removing state for container \"0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b\": container is already in removing state" Oct 2 19:57:19.198742 kubelet[1918]: E1002 19:57:19.198717 1918 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b\": container is already in removing state" containerID="0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b" Oct 2 19:57:19.198840 kubelet[1918]: E1002 19:57:19.198756 1918 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b": container is already in removing state; Skipping pod "cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69)" Oct 2 19:57:19.199060 kubelet[1918]: E1002 19:57:19.199038 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69)\"" pod="kube-system/cilium-jbmjb" podUID=331946e9-4a0f-47a0-a839-0388e94dee69 Oct 2 19:57:19.209811 env[1333]: time="2023-10-02T19:57:19.209768528Z" level=info msg="RemoveContainer for \"0caef4b670587fc8e15a8187a386d212f08a9e3e2ddb54f22ba4158d6c05d76b\" returns successfully" Oct 2 19:57:19.826350 kubelet[1918]: E1002 19:57:19.826288 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:20.826940 kubelet[1918]: E1002 19:57:20.826885 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:21.617758 kubelet[1918]: W1002 19:57:21.617713 1918 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod331946e9_4a0f_47a0_a839_0388e94dee69.slice/cri-containerd-29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb.scope WatchSource:0}: task 29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb not found: not found Oct 2 19:57:21.827102 kubelet[1918]: E1002 19:57:21.827035 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:22.827998 kubelet[1918]: E1002 19:57:22.827937 1918 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:22.972870 kubelet[1918]: E1002 19:57:22.972839 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:23.828906 kubelet[1918]: E1002 19:57:23.828846 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:24.829858 kubelet[1918]: E1002 19:57:24.829799 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:25.830616 kubelet[1918]: E1002 19:57:25.830555 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:26.831627 kubelet[1918]: E1002 19:57:26.831573 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:27.788675 kubelet[1918]: E1002 19:57:27.788618 1918 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:27.831818 kubelet[1918]: E1002 19:57:27.831765 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:27.973611 kubelet[1918]: E1002 19:57:27.973580 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:28.832324 kubelet[1918]: E1002 19:57:28.832264 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:29.832968 kubelet[1918]: E1002 19:57:29.832908 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:30.090081 kubelet[1918]: E1002 19:57:30.089409 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69)\"" pod="kube-system/cilium-jbmjb" podUID=331946e9-4a0f-47a0-a839-0388e94dee69 Oct 2 19:57:30.833696 kubelet[1918]: E1002 19:57:30.833637 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:31.834806 kubelet[1918]: E1002 19:57:31.834747 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:32.835379 kubelet[1918]: E1002 19:57:32.835317 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:32.974828 kubelet[1918]: E1002 19:57:32.974780 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:33.836229 kubelet[1918]: E1002 19:57:33.836170 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:34.836748 kubelet[1918]: E1002 19:57:34.836687 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:35.837128 
kubelet[1918]: E1002 19:57:35.837071 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:36.838035 kubelet[1918]: E1002 19:57:36.837975 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:37.838441 kubelet[1918]: E1002 19:57:37.838383 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:37.976396 kubelet[1918]: E1002 19:57:37.976361 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:38.838868 kubelet[1918]: E1002 19:57:38.838811 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:39.839888 kubelet[1918]: E1002 19:57:39.839772 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:40.840657 kubelet[1918]: E1002 19:57:40.840596 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:41.841757 kubelet[1918]: E1002 19:57:41.841706 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:42.842789 kubelet[1918]: E1002 19:57:42.842733 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:42.977661 kubelet[1918]: E1002 19:57:42.977627 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:43.843865 kubelet[1918]: E1002 19:57:43.843801 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:44.844972 kubelet[1918]: E1002 19:57:44.844912 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:45.091082 env[1333]: time="2023-10-02T19:57:45.091040183Z" level=info msg="CreateContainer within sandbox \"254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:57:45.146327 env[1333]: time="2023-10-02T19:57:45.146048731Z" level=info msg="CreateContainer within sandbox \"254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8\"" Oct 2 19:57:45.146860 env[1333]: time="2023-10-02T19:57:45.146827369Z" level=info msg="StartContainer for \"a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8\"" Oct 2 19:57:45.169254 systemd[1]: Started cri-containerd-a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8.scope. Oct 2 19:57:45.179920 systemd[1]: cri-containerd-a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8.scope: Deactivated successfully. Oct 2 19:57:45.180140 systemd[1]: Stopped cri-containerd-a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8.scope. 
Oct 2 19:57:45.183505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8-rootfs.mount: Deactivated successfully. Oct 2 19:57:45.217293 env[1333]: time="2023-10-02T19:57:45.217238959Z" level=info msg="shim disconnected" id=a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8 Oct 2 19:57:45.217293 env[1333]: time="2023-10-02T19:57:45.217291961Z" level=warning msg="cleaning up after shim disconnected" id=a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8 namespace=k8s.io Oct 2 19:57:45.217629 env[1333]: time="2023-10-02T19:57:45.217303562Z" level=info msg="cleaning up dead shim" Oct 2 19:57:45.224691 env[1333]: time="2023-10-02T19:57:45.224650516Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:57:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2370 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:57:45Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:57:45.224948 env[1333]: time="2023-10-02T19:57:45.224892227Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:57:45.227059 env[1333]: time="2023-10-02T19:57:45.227013829Z" level=error msg="Failed to pipe stdout of container \"a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8\"" error="reading from a closed fifo" Oct 2 19:57:45.227166 env[1333]: time="2023-10-02T19:57:45.227040031Z" level=error msg="Failed to pipe stderr of container \"a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8\"" error="reading from a closed fifo" Oct 2 19:57:45.231482 env[1333]: time="2023-10-02T19:57:45.231438842Z" level=error msg="StartContainer for \"a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:57:45.231679 kubelet[1918]: E1002 19:57:45.231646 1918 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8" Oct 2 19:57:45.231792 kubelet[1918]: E1002 19:57:45.231758 1918 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:57:45.231792 kubelet[1918]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:57:45.231792 kubelet[1918]: rm /hostbin/cilium-mount Oct 2 19:57:45.231792 kubelet[1918]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-98qck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:57:45.231992 kubelet[1918]: E1002 19:57:45.231809 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jbmjb" podUID=331946e9-4a0f-47a0-a839-0388e94dee69 Oct 2 19:57:45.239739 kubelet[1918]: I1002 19:57:45.239717 1918 scope.go:115] "RemoveContainer" containerID="29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb" Oct 2 19:57:45.240075 kubelet[1918]: I1002 19:57:45.240056 1918 scope.go:115] "RemoveContainer" containerID="29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb" Oct 2 19:57:45.241203 env[1333]: time="2023-10-02T19:57:45.241175911Z" level=info msg="RemoveContainer for \"29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb\"" Oct 2 19:57:45.241596 env[1333]: time="2023-10-02T19:57:45.241568930Z" level=info msg="RemoveContainer for \"29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb\"" Oct 2 19:57:45.241689 env[1333]: time="2023-10-02T19:57:45.241664735Z" level=error msg="RemoveContainer for \"29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb\" failed" error="failed to set removing state for container \"29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb\": container is already in removing state" Oct 2 19:57:45.241811 kubelet[1918]: E1002 19:57:45.241791 1918 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb\": container is already in removing state" 
containerID="29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb" Oct 2 19:57:45.241892 kubelet[1918]: E1002 19:57:45.241823 1918 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb": container is already in removing state; Skipping pod "cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69)" Oct 2 19:57:45.242070 kubelet[1918]: E1002 19:57:45.242054 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69)\"" pod="kube-system/cilium-jbmjb" podUID=331946e9-4a0f-47a0-a839-0388e94dee69 Oct 2 19:57:45.251994 env[1333]: time="2023-10-02T19:57:45.251960130Z" level=info msg="RemoveContainer for \"29be7a2455109c72385ea9564cef1e3984a446d5e02e3d2abbce441ec5de11bb\" returns successfully" Oct 2 19:57:45.845461 kubelet[1918]: E1002 19:57:45.845405 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:46.846224 kubelet[1918]: E1002 19:57:46.846167 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:47.789239 kubelet[1918]: E1002 19:57:47.789188 1918 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:47.846992 kubelet[1918]: E1002 19:57:47.846932 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:47.978178 kubelet[1918]: E1002 19:57:47.978144 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:48.321548 kubelet[1918]: W1002 19:57:48.321498 1918 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod331946e9_4a0f_47a0_a839_0388e94dee69.slice/cri-containerd-a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8.scope WatchSource:0}: task a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8 not found: not found Oct 2 19:57:48.847308 kubelet[1918]: E1002 19:57:48.847252 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:49.848001 kubelet[1918]: E1002 19:57:49.847940 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:50.849152 kubelet[1918]: E1002 19:57:50.849094 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:51.849863 kubelet[1918]: E1002 19:57:51.849800 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:52.850961 kubelet[1918]: E1002 19:57:52.850906 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:52.979205 kubelet[1918]: E1002 19:57:52.979161 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:53.851405 kubelet[1918]: E1002 19:57:53.851344 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:54.851550 kubelet[1918]: E1002 19:57:54.851490 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:55.852333 kubelet[1918]: E1002 19:57:55.852270 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:56.852674 kubelet[1918]: E1002 19:57:56.852622 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:57.089803 kubelet[1918]: E1002 19:57:57.089758 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69)\"" pod="kube-system/cilium-jbmjb" podUID=331946e9-4a0f-47a0-a839-0388e94dee69 Oct 2 19:57:57.853635 kubelet[1918]: E1002 19:57:57.853589 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:57.980600 kubelet[1918]: E1002 19:57:57.980566 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:58.854321 kubelet[1918]: E1002 19:57:58.854261 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:59.855241 kubelet[1918]: E1002 19:57:59.855182 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:00.856232 kubelet[1918]: E1002 19:58:00.856166 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:01.856728 kubelet[1918]: E1002 19:58:01.856624 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:02.857174 kubelet[1918]: E1002 19:58:02.857122 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:02.981361 kubelet[1918]: E1002 19:58:02.981318 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:03.858204 kubelet[1918]: E1002 19:58:03.858147 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:04.859055 kubelet[1918]: E1002 19:58:04.858999 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:05.859398 kubelet[1918]: E1002 19:58:05.859334 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:06.860332 kubelet[1918]: E1002 19:58:06.860266 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:07.788786 kubelet[1918]: E1002 19:58:07.788726 1918 file.go:104] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:07.861498 kubelet[1918]: E1002 19:58:07.861437 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:07.982041 kubelet[1918]: E1002 19:58:07.982010 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:08.862223 kubelet[1918]: E1002 19:58:08.862165 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:09.863040 kubelet[1918]: E1002 19:58:09.862979 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:10.863508 kubelet[1918]: E1002 19:58:10.863449 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:11.863984 kubelet[1918]: E1002 19:58:11.863926 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:12.089591 kubelet[1918]: E1002 19:58:12.089555 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69)\"" pod="kube-system/cilium-jbmjb" podUID=331946e9-4a0f-47a0-a839-0388e94dee69 Oct 2 19:58:12.864584 kubelet[1918]: E1002 19:58:12.864512 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:12.982942 kubelet[1918]: E1002 19:58:12.982906 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:13.865657 kubelet[1918]: E1002 19:58:13.865598 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:14.866252 kubelet[1918]: E1002 19:58:14.866191 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:15.866939 kubelet[1918]: E1002 19:58:15.866875 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:16.867806 kubelet[1918]: E1002 19:58:16.867713 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:17.868369 kubelet[1918]: E1002 19:58:17.868312 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:17.983784 kubelet[1918]: E1002 19:58:17.983752 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:18.869292 kubelet[1918]: E1002 19:58:18.869227 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:19.870136 kubelet[1918]: E1002 19:58:19.870035 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:58:20.871057 kubelet[1918]: E1002 19:58:20.870995 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:21.871700 kubelet[1918]: E1002 19:58:21.871621 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:22.872499 kubelet[1918]: E1002 19:58:22.872445 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:22.984868 kubelet[1918]: E1002 19:58:22.984832 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:23.872807 kubelet[1918]: E1002 19:58:23.872764 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:24.873243 kubelet[1918]: E1002 19:58:24.873181 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:25.873459 kubelet[1918]: E1002 19:58:25.873399 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:26.873697 kubelet[1918]: E1002 19:58:26.873639 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:27.091422 env[1333]: time="2023-10-02T19:58:27.091371430Z" level=info msg="CreateContainer within sandbox \"254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:58:27.115731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1145628954.mount: Deactivated successfully. Oct 2 19:58:27.122277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount495352456.mount: Deactivated successfully. Oct 2 19:58:27.138404 env[1333]: time="2023-10-02T19:58:27.138286856Z" level=info msg="CreateContainer within sandbox \"254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3\"" Oct 2 19:58:27.139273 env[1333]: time="2023-10-02T19:58:27.139244353Z" level=info msg="StartContainer for \"842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3\"" Oct 2 19:58:27.158783 systemd[1]: Started cri-containerd-842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3.scope. Oct 2 19:58:27.171317 systemd[1]: cri-containerd-842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3.scope: Deactivated successfully. 
Oct 2 19:58:27.203314 env[1333]: time="2023-10-02T19:58:27.203258216Z" level=info msg="shim disconnected" id=842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3 Oct 2 19:58:27.203314 env[1333]: time="2023-10-02T19:58:27.203310516Z" level=warning msg="cleaning up after shim disconnected" id=842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3 namespace=k8s.io Oct 2 19:58:27.203314 env[1333]: time="2023-10-02T19:58:27.203322016Z" level=info msg="cleaning up dead shim" Oct 2 19:58:27.210788 env[1333]: time="2023-10-02T19:58:27.210744588Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:58:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2407 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:58:27Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:58:27.211064 env[1333]: time="2023-10-02T19:58:27.211005987Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:58:27.211297 env[1333]: time="2023-10-02T19:58:27.211265086Z" level=error msg="Failed to pipe stderr of container \"842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3\"" error="reading from a closed fifo" Oct 2 19:58:27.215645 env[1333]: time="2023-10-02T19:58:27.215587970Z" level=error msg="Failed to pipe stdout of container \"842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3\"" error="reading from a closed fifo" Oct 2 19:58:27.219931 env[1333]: time="2023-10-02T19:58:27.219887954Z" level=error msg="StartContainer for \"842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:58:27.220174 kubelet[1918]: E1002 19:58:27.220154 1918 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3" Oct 2 19:58:27.220299 kubelet[1918]: E1002 19:58:27.220276 1918 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:58:27.220299 kubelet[1918]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:58:27.220299 kubelet[1918]: rm /hostbin/cilium-mount Oct 2 19:58:27.220299 kubelet[1918]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-98qck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:58:27.220495 kubelet[1918]: E1002 19:58:27.220324 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jbmjb" podUID=331946e9-4a0f-47a0-a839-0388e94dee69 Oct 2 19:58:27.312062 kubelet[1918]: I1002 19:58:27.312032 1918 scope.go:115] "RemoveContainer" containerID="a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8" Oct 2 19:58:27.312476 kubelet[1918]: I1002 19:58:27.312430 1918 scope.go:115] "RemoveContainer" containerID="a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8" Oct 2 19:58:27.313676 env[1333]: time="2023-10-02T19:58:27.313635308Z" level=info msg="RemoveContainer for \"a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8\"" Oct 2 19:58:27.314074 env[1333]: time="2023-10-02T19:58:27.314045806Z" level=info msg="RemoveContainer for \"a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8\"" Oct 2 19:58:27.314358 env[1333]: time="2023-10-02T19:58:27.314312905Z" level=error msg="RemoveContainer for \"a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8\" failed" error="failed to set removing state for container \"a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8\": container is already in removing state" Oct 2 19:58:27.314481 kubelet[1918]: E1002 19:58:27.314459 1918 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8\": container is already in removing state" 
containerID="a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8" Oct 2 19:58:27.314585 kubelet[1918]: E1002 19:58:27.314500 1918 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8": container is already in removing state; Skipping pod "cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69)" Oct 2 19:58:27.314859 kubelet[1918]: E1002 19:58:27.314840 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69)\"" pod="kube-system/cilium-jbmjb" podUID=331946e9-4a0f-47a0-a839-0388e94dee69 Oct 2 19:58:27.323994 env[1333]: time="2023-10-02T19:58:27.323961469Z" level=info msg="RemoveContainer for \"a5331783d53a119275ce4b3c0db5292bff29b32fafa5fa67167e2d081593f0b8\" returns successfully" Oct 2 19:58:27.788383 kubelet[1918]: E1002 19:58:27.788329 1918 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:27.874757 kubelet[1918]: E1002 19:58:27.874695 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:27.986236 kubelet[1918]: E1002 19:58:27.986207 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:28.113143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3-rootfs.mount: Deactivated successfully. 
Oct 2 19:58:28.875406 kubelet[1918]: E1002 19:58:28.875345 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:29.875736 kubelet[1918]: E1002 19:58:29.875675 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:30.309496 kubelet[1918]: W1002 19:58:30.309451 1918 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod331946e9_4a0f_47a0_a839_0388e94dee69.slice/cri-containerd-842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3.scope WatchSource:0}: task 842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3 not found: not found Oct 2 19:58:30.876847 kubelet[1918]: E1002 19:58:30.876784 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:31.877940 kubelet[1918]: E1002 19:58:31.877877 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:32.878280 kubelet[1918]: E1002 19:58:32.878220 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:32.987506 kubelet[1918]: E1002 19:58:32.987472 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:33.879253 kubelet[1918]: E1002 19:58:33.879195 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:34.880348 kubelet[1918]: E1002 19:58:34.880283 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:35.881233 kubelet[1918]: E1002 19:58:35.881170 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:36.882216 kubelet[1918]: E1002 19:58:36.882151 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:37.882821 kubelet[1918]: E1002 19:58:37.882763 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:37.988374 kubelet[1918]: E1002 19:58:37.988328 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:38.883025 kubelet[1918]: E1002 19:58:38.882962 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:39.883382 kubelet[1918]: E1002 19:58:39.883327 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:40.090113 kubelet[1918]: E1002 19:58:40.089567 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69)\"" pod="kube-system/cilium-jbmjb" podUID=331946e9-4a0f-47a0-a839-0388e94dee69 Oct 2 19:58:40.884342 kubelet[1918]: E1002 19:58:40.884277 1918 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:41.885326 kubelet[1918]: E1002 19:58:41.885245 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:42.886340 kubelet[1918]: E1002 19:58:42.886279 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:42.989605 kubelet[1918]: E1002 19:58:42.989567 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:43.887035 kubelet[1918]: E1002 19:58:43.886978 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:44.887590 kubelet[1918]: E1002 19:58:44.887516 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:45.888341 kubelet[1918]: E1002 19:58:45.888276 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:46.889165 kubelet[1918]: E1002 19:58:46.889099 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:47.788895 kubelet[1918]: E1002 19:58:47.788840 1918 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:47.889463 kubelet[1918]: E1002 19:58:47.889406 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:47.991065 kubelet[1918]: E1002 19:58:47.991028 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:48.890269 kubelet[1918]: E1002 19:58:48.890202 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:49.890663 kubelet[1918]: E1002 19:58:49.890604 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:50.890986 kubelet[1918]: E1002 19:58:50.890926 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:51.891891 kubelet[1918]: E1002 19:58:51.891833 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:52.892668 kubelet[1918]: E1002 19:58:52.892606 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:52.991904 kubelet[1918]: E1002 19:58:52.991856 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:53.893743 kubelet[1918]: E1002 19:58:53.893690 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:54.894186 kubelet[1918]: E1002 19:58:54.894114 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
Oct 2 19:58:55.089393 kubelet[1918]: E1002 19:58:55.089344 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69)\"" pod="kube-system/cilium-jbmjb" podUID=331946e9-4a0f-47a0-a839-0388e94dee69 Oct 2 19:58:55.895050 kubelet[1918]: E1002 19:58:55.894991 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:56.895649 kubelet[1918]: E1002 19:58:56.895586 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:57.896040 kubelet[1918]: E1002 19:58:57.895982 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:57.992926 kubelet[1918]: E1002 19:58:57.992887 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:58.896541 kubelet[1918]: E1002 19:58:58.896467 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:59.896959 kubelet[1918]: E1002 19:58:59.896902 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:00.897491 kubelet[1918]: E1002 19:59:00.897434 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:01.898333 kubelet[1918]: E1002 19:59:01.898244 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:02.898695 kubelet[1918]: E1002 19:59:02.898635 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:02.993626 kubelet[1918]: E1002 19:59:02.993591 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:03.898973 kubelet[1918]: E1002 19:59:03.898912 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:04.899253 kubelet[1918]: E1002 19:59:04.899194 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:05.899548 kubelet[1918]: E1002 19:59:05.899487 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:06.089998 kubelet[1918]: E1002 19:59:06.089758 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69)\"" pod="kube-system/cilium-jbmjb" podUID=331946e9-4a0f-47a0-a839-0388e94dee69 Oct 2 19:59:06.899911 kubelet[1918]: E1002 19:59:06.899853 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:07.788821 kubelet[1918]: E1002 19:59:07.788763 1918 file.go:104] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:07.900497 kubelet[1918]: E1002 19:59:07.900436 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:07.995064 kubelet[1918]: E1002 19:59:07.995034 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:08.901430 kubelet[1918]: E1002 19:59:08.901370 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:09.902120 kubelet[1918]: E1002 19:59:09.902002 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:10.903008 kubelet[1918]: E1002 19:59:10.902955 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:11.904107 kubelet[1918]: E1002 19:59:11.904048 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:12.904286 kubelet[1918]: E1002 19:59:12.904231 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:12.996677 kubelet[1918]: E1002 19:59:12.996631 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:13.904685 kubelet[1918]: E1002 19:59:13.904626 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:14.905369 kubelet[1918]: E1002 19:59:14.905310 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:15.906275 kubelet[1918]: E1002 19:59:15.906217 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:16.907326 kubelet[1918]: E1002 19:59:16.907269 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:17.908461 kubelet[1918]: E1002 19:59:17.908404 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:17.997952 kubelet[1918]: E1002 19:59:17.997919 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:18.089837 kubelet[1918]: E1002 19:59:18.089800 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69)\"" pod="kube-system/cilium-jbmjb" podUID=331946e9-4a0f-47a0-a839-0388e94dee69 Oct 2 19:59:18.909257 kubelet[1918]: E1002 19:59:18.909193 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:19.909889 kubelet[1918]: E1002 19:59:19.909830 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:59:20.910637 kubelet[1918]: E1002 19:59:20.910580 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:21.911371 kubelet[1918]: E1002 19:59:21.911307 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:22.911635 kubelet[1918]: E1002 19:59:22.911581 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:22.998937 kubelet[1918]: E1002 19:59:22.998900 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:23.912459 kubelet[1918]: E1002 19:59:23.912403 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:24.913223 kubelet[1918]: E1002 19:59:24.913164 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:25.913764 kubelet[1918]: E1002 19:59:25.913708 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:26.915071 kubelet[1918]: E1002 19:59:26.914968 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:27.789148 kubelet[1918]: E1002 19:59:27.789092 1918 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:27.915869 kubelet[1918]: E1002 19:59:27.915804 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:28.000164 kubelet[1918]: E1002 19:59:28.000131 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:28.916347 kubelet[1918]: E1002 19:59:28.916283 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:29.917360 kubelet[1918]: E1002 19:59:29.917297 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:30.089638 kubelet[1918]: E1002 19:59:30.089355 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69)\"" pod="kube-system/cilium-jbmjb" podUID=331946e9-4a0f-47a0-a839-0388e94dee69 Oct 2 19:59:30.918193 kubelet[1918]: E1002 19:59:30.918130 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:31.919163 kubelet[1918]: E1002 19:59:31.919104 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:32.919733 kubelet[1918]: E1002 19:59:32.919673 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:33.001690 kubelet[1918]: E1002 19:59:33.001656 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:33.920127 kubelet[1918]: E1002 19:59:33.920063 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:34.921127 kubelet[1918]: E1002 19:59:34.921069 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:35.922281 kubelet[1918]: E1002 19:59:35.922185 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:36.922663 kubelet[1918]: E1002 19:59:36.922609 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:37.923746 kubelet[1918]: E1002 19:59:37.923687 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:38.002903 kubelet[1918]: E1002 19:59:38.002872 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:38.924017 kubelet[1918]: E1002 19:59:38.923952 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:39.925052 kubelet[1918]: E1002 19:59:39.924995 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:40.925356 kubelet[1918]: E1002 19:59:40.925292 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:41.925549 kubelet[1918]: E1002 19:59:41.925482 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:42.926340 kubelet[1918]: E1002 19:59:42.926285 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:43.003607 kubelet[1918]: E1002 19:59:43.003576 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:43.089237 kubelet[1918]: E1002 19:59:43.089192 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69)\"" pod="kube-system/cilium-jbmjb" podUID=331946e9-4a0f-47a0-a839-0388e94dee69 Oct 2 19:59:43.927252 kubelet[1918]: E1002 19:59:43.927193 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:44.927677 kubelet[1918]: E1002 19:59:44.927616 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:45.928658 kubelet[1918]: E1002 19:59:45.928597 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:46.929285 kubelet[1918]: E1002 19:59:46.929225 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:47.789060 kubelet[1918]: E1002 
19:59:47.789006 1918 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:47.930361 kubelet[1918]: E1002 19:59:47.930272 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:48.004965 kubelet[1918]: E1002 19:59:48.004932 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:48.930731 kubelet[1918]: E1002 19:59:48.930674 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:49.931200 kubelet[1918]: E1002 19:59:49.931141 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:50.931820 kubelet[1918]: E1002 19:59:50.931756 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:51.932603 kubelet[1918]: E1002 19:59:51.932542 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:52.933044 kubelet[1918]: E1002 19:59:52.932984 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:53.006115 kubelet[1918]: E1002 19:59:53.006075 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:53.933762 kubelet[1918]: E1002 19:59:53.933700 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:54.934021 kubelet[1918]: E1002 19:59:54.933964 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:55.097257 env[1333]: time="2023-10-02T19:59:55.097204134Z" level=info msg="CreateContainer within sandbox \"254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}" Oct 2 19:59:55.131859 env[1333]: time="2023-10-02T19:59:55.131813553Z" level=info msg="CreateContainer within sandbox \"254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id \"613b50d58e233866483fe439ed9b6465f7611d3b036c9c9c9dc97734af20e998\"" Oct 2 19:59:55.132324 env[1333]: time="2023-10-02T19:59:55.132293464Z" level=info msg="StartContainer for \"613b50d58e233866483fe439ed9b6465f7611d3b036c9c9c9dc97734af20e998\"" Oct 2 19:59:55.156783 systemd[1]: Started cri-containerd-613b50d58e233866483fe439ed9b6465f7611d3b036c9c9c9dc97734af20e998.scope. Oct 2 19:59:55.158581 systemd[1]: run-containerd-runc-k8s.io-613b50d58e233866483fe439ed9b6465f7611d3b036c9c9c9dc97734af20e998-runc.yPFFkA.mount: Deactivated successfully. Oct 2 19:59:55.169971 systemd[1]: cri-containerd-613b50d58e233866483fe439ed9b6465f7611d3b036c9c9c9dc97734af20e998.scope: Deactivated successfully. 
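The two errors repeating through this window — the missing /etc/kubernetes/manifests static-pod directory and the NetworkReady=false condition — come from the kubelet's periodic config and runtime checks, not from the failing cilium container itself. A minimal Go sketch of the same runtime check, assuming the CRI socket sits at the containerd default /run/containerd/containerd.sock (an assumption; adjust per node), would query the runtime's Status endpoint and print its conditions:

```go
// Minimal sketch: read the CRI runtime Status conditions that the kubelet is
// reporting above. Socket path is the containerd default and is an assumption.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Status(ctx, &runtimeapi.StatusRequest{})
	if err != nil {
		log.Fatalf("runtime Status: %v", err)
	}
	for _, cond := range resp.GetStatus().GetConditions() {
		// On this node one would expect RuntimeReady=true and
		// NetworkReady=false (reason NetworkPluginNotReady) until a CNI
		// configuration is installed, typically under /etc/cni/net.d.
		fmt.Printf("%s=%v reason=%q message=%q\n",
			cond.GetType(), cond.GetStatus(), cond.GetReason(), cond.GetMessage())
	}
}
```

Run against this node, the sketch should mirror the kubelet messages above: the network condition stays false until the cilium agent manages to start and write its CNI configuration.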
Oct 2 19:59:55.187215 env[1333]: time="2023-10-02T19:59:55.186636749Z" level=info msg="shim disconnected" id=613b50d58e233866483fe439ed9b6465f7611d3b036c9c9c9dc97734af20e998 Oct 2 19:59:55.187215 env[1333]: time="2023-10-02T19:59:55.186693651Z" level=warning msg="cleaning up after shim disconnected" id=613b50d58e233866483fe439ed9b6465f7611d3b036c9c9c9dc97734af20e998 namespace=k8s.io Oct 2 19:59:55.187215 env[1333]: time="2023-10-02T19:59:55.186704751Z" level=info msg="cleaning up dead shim" Oct 2 19:59:55.194356 env[1333]: time="2023-10-02T19:59:55.194298131Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2458 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:59:55Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/613b50d58e233866483fe439ed9b6465f7611d3b036c9c9c9dc97734af20e998/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:59:55.194647 env[1333]: time="2023-10-02T19:59:55.194587537Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:59:55.197622 env[1333]: time="2023-10-02T19:59:55.197569708Z" level=error msg="Failed to pipe stdout of container \"613b50d58e233866483fe439ed9b6465f7611d3b036c9c9c9dc97734af20e998\"" error="reading from a closed fifo" Oct 2 19:59:55.201606 env[1333]: time="2023-10-02T19:59:55.201565702Z" level=error msg="Failed to pipe stderr of container \"613b50d58e233866483fe439ed9b6465f7611d3b036c9c9c9dc97734af20e998\"" error="reading from a closed fifo" Oct 2 19:59:55.205897 env[1333]: time="2023-10-02T19:59:55.205855904Z" level=error msg="StartContainer for \"613b50d58e233866483fe439ed9b6465f7611d3b036c9c9c9dc97734af20e998\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:59:55.206082 kubelet[1918]: E1002 19:59:55.206058 1918 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="613b50d58e233866483fe439ed9b6465f7611d3b036c9c9c9dc97734af20e998" Oct 2 19:59:55.206206 kubelet[1918]: E1002 19:59:55.206189 1918 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:59:55.206206 kubelet[1918]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:59:55.206206 kubelet[1918]: rm /hostbin/cilium-mount Oct 2 19:59:55.206206 kubelet[1918]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-98qck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:59:55.206413 kubelet[1918]: E1002 19:59:55.206240 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jbmjb" podUID=331946e9-4a0f-47a0-a839-0388e94dee69 Oct 2 19:59:55.457117 kubelet[1918]: I1002 19:59:55.456993 1918 scope.go:115] "RemoveContainer" containerID="842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3" Oct 2 19:59:55.458130 kubelet[1918]: I1002 19:59:55.458091 1918 scope.go:115] "RemoveContainer" containerID="842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3" Oct 2 19:59:55.459636 env[1333]: time="2023-10-02T19:59:55.459597704Z" level=info msg="RemoveContainer for \"842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3\"" Oct 2 19:59:55.460376 env[1333]: time="2023-10-02T19:59:55.460317922Z" level=info msg="RemoveContainer for \"842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3\"" Oct 2 19:59:55.460490 env[1333]: time="2023-10-02T19:59:55.460433824Z" level=error msg="RemoveContainer for \"842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3\" failed" error="failed to set removing state for container \"842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3\": container is already in removing state" Oct 2 19:59:55.460637 kubelet[1918]: E1002 19:59:55.460613 1918 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3\": container is already in removing state" 
containerID="842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3" Oct 2 19:59:55.460737 kubelet[1918]: E1002 19:59:55.460655 1918 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3": container is already in removing state; Skipping pod "cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69)" Oct 2 19:59:55.461844 kubelet[1918]: E1002 19:59:55.461166 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69)\"" pod="kube-system/cilium-jbmjb" podUID=331946e9-4a0f-47a0-a839-0388e94dee69 Oct 2 19:59:55.471394 env[1333]: time="2023-10-02T19:59:55.471355883Z" level=info msg="RemoveContainer for \"842b76bf0f60659dcefa1d4bc2d2ffed0bb178363b4dcfc8a5245ce6e90dc5a3\" returns successfully" Oct 2 19:59:55.935070 kubelet[1918]: E1002 19:59:55.935009 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:56.120796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-613b50d58e233866483fe439ed9b6465f7611d3b036c9c9c9dc97734af20e998-rootfs.mount: Deactivated successfully. Oct 2 19:59:56.935957 kubelet[1918]: E1002 19:59:56.935897 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:57.936598 kubelet[1918]: E1002 19:59:57.936537 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:58.007256 kubelet[1918]: E1002 19:59:58.007224 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:58.292407 kubelet[1918]: W1002 19:59:58.292362 1918 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod331946e9_4a0f_47a0_a839_0388e94dee69.slice/cri-containerd-613b50d58e233866483fe439ed9b6465f7611d3b036c9c9c9dc97734af20e998.scope WatchSource:0}: task 613b50d58e233866483fe439ed9b6465f7611d3b036c9c9c9dc97734af20e998 not found: not found Oct 2 19:59:58.937156 kubelet[1918]: E1002 19:59:58.937092 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:59.938198 kubelet[1918]: E1002 19:59:59.938141 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:00.939164 kubelet[1918]: E1002 20:00:00.939104 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:01.939628 kubelet[1918]: E1002 20:00:01.939569 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:02.939887 kubelet[1918]: E1002 20:00:02.939826 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:03.008310 kubelet[1918]: E1002 20:00:03.008273 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:03.940335 kubelet[1918]: E1002 20:00:03.940273 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:04.940925 kubelet[1918]: E1002 20:00:04.940868 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:05.941421 kubelet[1918]: E1002 20:00:05.941361 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:06.942160 kubelet[1918]: E1002 20:00:06.942100 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:07.788755 kubelet[1918]: E1002 20:00:07.788698 1918 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:07.943046 kubelet[1918]: E1002 20:00:07.942985 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:08.009171 kubelet[1918]: E1002 20:00:08.009122 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:08.090268 kubelet[1918]: E1002 20:00:08.089867 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-jbmjb_kube-system(331946e9-4a0f-47a0-a839-0388e94dee69)\"" pod="kube-system/cilium-jbmjb" podUID=331946e9-4a0f-47a0-a839-0388e94dee69 Oct 2 20:00:08.943413 kubelet[1918]: E1002 20:00:08.943347 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:09.944543 kubelet[1918]: E1002 20:00:09.944466 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:10.826914 env[1333]: time="2023-10-02T20:00:10.826860678Z" level=info msg="StopPodSandbox for \"254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b\"" Oct 2 20:00:10.829770 env[1333]: time="2023-10-02T20:00:10.826952980Z" level=info msg="Container to stop \"613b50d58e233866483fe439ed9b6465f7611d3b036c9c9c9dc97734af20e998\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:00:10.828758 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b-shm.mount: Deactivated successfully. Oct 2 20:00:10.836000 audit: BPF prog-id=68 op=UNLOAD Oct 2 20:00:10.836967 systemd[1]: cri-containerd-254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b.scope: Deactivated successfully. Oct 2 20:00:10.841485 kernel: kauditd_printk_skb: 165 callbacks suppressed Oct 2 20:00:10.841580 kernel: audit: type=1334 audit(1696276810.836:666): prog-id=68 op=UNLOAD Oct 2 20:00:10.846000 audit: BPF prog-id=72 op=UNLOAD Oct 2 20:00:10.852671 kernel: audit: type=1334 audit(1696276810.846:667): prog-id=72 op=UNLOAD Oct 2 20:00:10.863077 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b-rootfs.mount: Deactivated successfully. 
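At this point the old sandbox 254d0883… has been stopped and its scope, BPF programs, and rootfs mount cleaned up; the next entries show containerd reporting the shim gone and the pod network torn down. A minimal Go sketch, assuming direct access to the node's containerd socket, that checks whether a sandbox container still has a live task (the same "must be in running or unknown state" condition logged above) could look like this:

```go
// Minimal sketch (sandbox ID taken from the log above, socket path assumed to
// be the containerd default): ask containerd whether the stopped sandbox
// still has a task attached.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect to containerd: %v", err)
	}
	defer client.Close()

	// CRI-managed containers and sandboxes live in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	const sandboxID = "254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b"
	container, err := client.LoadContainer(ctx, sandboxID)
	if err != nil {
		log.Fatalf("load container: %v", err)
	}

	task, err := container.Task(ctx, nil)
	if errdefs.IsNotFound(err) {
		fmt.Println("sandbox has no running task (already torn down)")
		return
	} else if err != nil {
		log.Fatalf("query task: %v", err)
	}

	status, err := task.Status(ctx)
	if err != nil {
		log.Fatalf("task status: %v", err)
	}
	fmt.Printf("sandbox task state: %s\n", status.Status)
}
```

Here errdefs.IsNotFound distinguishes an already-reaped task from a real query failure, which is the same situation behind the earlier "task … not found" watch-event warning logged at 19:59:58.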
Oct 2 20:00:10.899642 env[1333]: time="2023-10-02T20:00:10.899591337Z" level=info msg="shim disconnected" id=254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b Oct 2 20:00:10.899856 env[1333]: time="2023-10-02T20:00:10.899709641Z" level=warning msg="cleaning up after shim disconnected" id=254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b namespace=k8s.io Oct 2 20:00:10.899856 env[1333]: time="2023-10-02T20:00:10.899725441Z" level=info msg="cleaning up dead shim" Oct 2 20:00:10.907797 env[1333]: time="2023-10-02T20:00:10.907754346Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2490 runtime=io.containerd.runc.v2\n" Oct 2 20:00:10.908082 env[1333]: time="2023-10-02T20:00:10.908053054Z" level=info msg="TearDown network for sandbox \"254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b\" successfully" Oct 2 20:00:10.908172 env[1333]: time="2023-10-02T20:00:10.908079955Z" level=info msg="StopPodSandbox for \"254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b\" returns successfully" Oct 2 20:00:10.945323 kubelet[1918]: E1002 20:00:10.945282 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:10.953489 kubelet[1918]: I1002 20:00:10.953452 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-hostproc\") pod \"331946e9-4a0f-47a0-a839-0388e94dee69\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " Oct 2 20:00:10.953632 kubelet[1918]: I1002 20:00:10.953511 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/331946e9-4a0f-47a0-a839-0388e94dee69-cilium-config-path\") pod \"331946e9-4a0f-47a0-a839-0388e94dee69\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " Oct 2 20:00:10.953632 kubelet[1918]: I1002 20:00:10.953552 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-etc-cni-netd\") pod \"331946e9-4a0f-47a0-a839-0388e94dee69\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " Oct 2 20:00:10.953632 kubelet[1918]: I1002 20:00:10.953581 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-bpf-maps\") pod \"331946e9-4a0f-47a0-a839-0388e94dee69\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " Oct 2 20:00:10.953632 kubelet[1918]: I1002 20:00:10.953617 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/331946e9-4a0f-47a0-a839-0388e94dee69-clustermesh-secrets\") pod \"331946e9-4a0f-47a0-a839-0388e94dee69\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " Oct 2 20:00:10.953872 kubelet[1918]: I1002 20:00:10.953647 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-host-proc-sys-net\") pod \"331946e9-4a0f-47a0-a839-0388e94dee69\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " Oct 2 20:00:10.953872 kubelet[1918]: I1002 20:00:10.953679 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-cni-path\") pod \"331946e9-4a0f-47a0-a839-0388e94dee69\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " Oct 2 20:00:10.953872 kubelet[1918]: I1002 20:00:10.953713 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-host-proc-sys-kernel\") pod \"331946e9-4a0f-47a0-a839-0388e94dee69\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " Oct 2 20:00:10.953872 kubelet[1918]: I1002 20:00:10.953750 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/331946e9-4a0f-47a0-a839-0388e94dee69-hubble-tls\") pod \"331946e9-4a0f-47a0-a839-0388e94dee69\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " Oct 2 20:00:10.953872 kubelet[1918]: I1002 20:00:10.953780 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-lib-modules\") pod \"331946e9-4a0f-47a0-a839-0388e94dee69\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " Oct 2 20:00:10.953872 kubelet[1918]: I1002 20:00:10.953817 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-cilium-cgroup\") pod \"331946e9-4a0f-47a0-a839-0388e94dee69\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " Oct 2 20:00:10.954184 kubelet[1918]: I1002 20:00:10.953850 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-cilium-run\") pod \"331946e9-4a0f-47a0-a839-0388e94dee69\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " Oct 2 20:00:10.954184 kubelet[1918]: I1002 20:00:10.953887 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98qck\" (UniqueName: \"kubernetes.io/projected/331946e9-4a0f-47a0-a839-0388e94dee69-kube-api-access-98qck\") pod \"331946e9-4a0f-47a0-a839-0388e94dee69\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " Oct 2 20:00:10.954184 kubelet[1918]: I1002 20:00:10.953921 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-xtables-lock\") pod \"331946e9-4a0f-47a0-a839-0388e94dee69\" (UID: \"331946e9-4a0f-47a0-a839-0388e94dee69\") " Oct 2 20:00:10.954184 kubelet[1918]: I1002 20:00:10.953982 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "331946e9-4a0f-47a0-a839-0388e94dee69" (UID: "331946e9-4a0f-47a0-a839-0388e94dee69"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:10.954184 kubelet[1918]: I1002 20:00:10.954030 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-hostproc" (OuterVolumeSpecName: "hostproc") pod "331946e9-4a0f-47a0-a839-0388e94dee69" (UID: "331946e9-4a0f-47a0-a839-0388e94dee69"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:10.954464 kubelet[1918]: W1002 20:00:10.954259 1918 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/331946e9-4a0f-47a0-a839-0388e94dee69/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:00:10.957260 kubelet[1918]: I1002 20:00:10.954581 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "331946e9-4a0f-47a0-a839-0388e94dee69" (UID: "331946e9-4a0f-47a0-a839-0388e94dee69"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:10.957260 kubelet[1918]: I1002 20:00:10.954635 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "331946e9-4a0f-47a0-a839-0388e94dee69" (UID: "331946e9-4a0f-47a0-a839-0388e94dee69"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:10.957260 kubelet[1918]: I1002 20:00:10.954664 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "331946e9-4a0f-47a0-a839-0388e94dee69" (UID: "331946e9-4a0f-47a0-a839-0388e94dee69"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:10.957260 kubelet[1918]: I1002 20:00:10.955332 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "331946e9-4a0f-47a0-a839-0388e94dee69" (UID: "331946e9-4a0f-47a0-a839-0388e94dee69"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:10.957260 kubelet[1918]: I1002 20:00:10.955382 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-cni-path" (OuterVolumeSpecName: "cni-path") pod "331946e9-4a0f-47a0-a839-0388e94dee69" (UID: "331946e9-4a0f-47a0-a839-0388e94dee69"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:10.957554 kubelet[1918]: I1002 20:00:10.955418 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "331946e9-4a0f-47a0-a839-0388e94dee69" (UID: "331946e9-4a0f-47a0-a839-0388e94dee69"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:10.957554 kubelet[1918]: I1002 20:00:10.956920 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/331946e9-4a0f-47a0-a839-0388e94dee69-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "331946e9-4a0f-47a0-a839-0388e94dee69" (UID: "331946e9-4a0f-47a0-a839-0388e94dee69"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:00:10.957554 kubelet[1918]: I1002 20:00:10.956980 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "331946e9-4a0f-47a0-a839-0388e94dee69" (UID: "331946e9-4a0f-47a0-a839-0388e94dee69"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:10.957554 kubelet[1918]: I1002 20:00:10.957206 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "331946e9-4a0f-47a0-a839-0388e94dee69" (UID: "331946e9-4a0f-47a0-a839-0388e94dee69"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:10.960712 kubelet[1918]: I1002 20:00:10.960690 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/331946e9-4a0f-47a0-a839-0388e94dee69-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "331946e9-4a0f-47a0-a839-0388e94dee69" (UID: "331946e9-4a0f-47a0-a839-0388e94dee69"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:00:10.961722 systemd[1]: var-lib-kubelet-pods-331946e9\x2d4a0f\x2d47a0\x2da839\x2d0388e94dee69-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 20:00:10.961855 systemd[1]: var-lib-kubelet-pods-331946e9\x2d4a0f\x2d47a0\x2da839\x2d0388e94dee69-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 20:00:10.965723 kubelet[1918]: I1002 20:00:10.965286 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/331946e9-4a0f-47a0-a839-0388e94dee69-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "331946e9-4a0f-47a0-a839-0388e94dee69" (UID: "331946e9-4a0f-47a0-a839-0388e94dee69"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:00:10.965317 systemd[1]: var-lib-kubelet-pods-331946e9\x2d4a0f\x2d47a0\x2da839\x2d0388e94dee69-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d98qck.mount: Deactivated successfully. Oct 2 20:00:10.966237 kubelet[1918]: I1002 20:00:10.966209 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/331946e9-4a0f-47a0-a839-0388e94dee69-kube-api-access-98qck" (OuterVolumeSpecName: "kube-api-access-98qck") pod "331946e9-4a0f-47a0-a839-0388e94dee69" (UID: "331946e9-4a0f-47a0-a839-0388e94dee69"). InnerVolumeSpecName "kube-api-access-98qck". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:00:11.054462 kubelet[1918]: I1002 20:00:11.054410 1918 reconciler.go:399] "Volume detached for volume \"kube-api-access-98qck\" (UniqueName: \"kubernetes.io/projected/331946e9-4a0f-47a0-a839-0388e94dee69-kube-api-access-98qck\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:11.054462 kubelet[1918]: I1002 20:00:11.054470 1918 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-xtables-lock\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:11.054757 kubelet[1918]: I1002 20:00:11.054487 1918 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-cilium-run\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:11.054757 kubelet[1918]: I1002 20:00:11.054503 1918 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/331946e9-4a0f-47a0-a839-0388e94dee69-clustermesh-secrets\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:11.054757 kubelet[1918]: I1002 20:00:11.054521 1918 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-host-proc-sys-net\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:11.054757 kubelet[1918]: I1002 20:00:11.054553 1918 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-hostproc\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:11.054757 kubelet[1918]: I1002 20:00:11.054567 1918 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/331946e9-4a0f-47a0-a839-0388e94dee69-cilium-config-path\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:11.054757 kubelet[1918]: I1002 20:00:11.054582 1918 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-etc-cni-netd\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:11.054757 kubelet[1918]: I1002 20:00:11.054597 1918 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-bpf-maps\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:11.054757 kubelet[1918]: I1002 20:00:11.054611 1918 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/331946e9-4a0f-47a0-a839-0388e94dee69-hubble-tls\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:11.054757 kubelet[1918]: I1002 20:00:11.054625 1918 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-lib-modules\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:11.055046 kubelet[1918]: I1002 20:00:11.054640 1918 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-cni-path\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:11.055046 kubelet[1918]: I1002 20:00:11.054656 1918 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-host-proc-sys-kernel\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:11.055046 kubelet[1918]: I1002 20:00:11.054671 1918 
reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/331946e9-4a0f-47a0-a839-0388e94dee69-cilium-cgroup\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:11.484794 kubelet[1918]: I1002 20:00:11.484761 1918 scope.go:115] "RemoveContainer" containerID="613b50d58e233866483fe439ed9b6465f7611d3b036c9c9c9dc97734af20e998" Oct 2 20:00:11.486748 env[1333]: time="2023-10-02T20:00:11.486677601Z" level=info msg="RemoveContainer for \"613b50d58e233866483fe439ed9b6465f7611d3b036c9c9c9dc97734af20e998\"" Oct 2 20:00:11.489723 systemd[1]: Removed slice kubepods-burstable-pod331946e9_4a0f_47a0_a839_0388e94dee69.slice. Oct 2 20:00:11.508653 env[1333]: time="2023-10-02T20:00:11.508612764Z" level=info msg="RemoveContainer for \"613b50d58e233866483fe439ed9b6465f7611d3b036c9c9c9dc97734af20e998\" returns successfully" Oct 2 20:00:11.514291 kubelet[1918]: I1002 20:00:11.514262 1918 topology_manager.go:205] "Topology Admit Handler" Oct 2 20:00:11.514425 kubelet[1918]: E1002 20:00:11.514312 1918 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="331946e9-4a0f-47a0-a839-0388e94dee69" containerName="mount-cgroup" Oct 2 20:00:11.514425 kubelet[1918]: E1002 20:00:11.514323 1918 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="331946e9-4a0f-47a0-a839-0388e94dee69" containerName="mount-cgroup" Oct 2 20:00:11.514425 kubelet[1918]: E1002 20:00:11.514332 1918 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="331946e9-4a0f-47a0-a839-0388e94dee69" containerName="mount-cgroup" Oct 2 20:00:11.514425 kubelet[1918]: E1002 20:00:11.514341 1918 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="331946e9-4a0f-47a0-a839-0388e94dee69" containerName="mount-cgroup" Oct 2 20:00:11.514425 kubelet[1918]: I1002 20:00:11.514362 1918 memory_manager.go:345] "RemoveStaleState removing state" podUID="331946e9-4a0f-47a0-a839-0388e94dee69" containerName="mount-cgroup" Oct 2 20:00:11.514425 kubelet[1918]: I1002 20:00:11.514370 1918 memory_manager.go:345] "RemoveStaleState removing state" podUID="331946e9-4a0f-47a0-a839-0388e94dee69" containerName="mount-cgroup" Oct 2 20:00:11.514425 kubelet[1918]: I1002 20:00:11.514378 1918 memory_manager.go:345] "RemoveStaleState removing state" podUID="331946e9-4a0f-47a0-a839-0388e94dee69" containerName="mount-cgroup" Oct 2 20:00:11.514425 kubelet[1918]: I1002 20:00:11.514386 1918 memory_manager.go:345] "RemoveStaleState removing state" podUID="331946e9-4a0f-47a0-a839-0388e94dee69" containerName="mount-cgroup" Oct 2 20:00:11.514425 kubelet[1918]: E1002 20:00:11.514406 1918 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="331946e9-4a0f-47a0-a839-0388e94dee69" containerName="mount-cgroup" Oct 2 20:00:11.514425 kubelet[1918]: I1002 20:00:11.514421 1918 memory_manager.go:345] "RemoveStaleState removing state" podUID="331946e9-4a0f-47a0-a839-0388e94dee69" containerName="mount-cgroup" Oct 2 20:00:11.514425 kubelet[1918]: I1002 20:00:11.514430 1918 memory_manager.go:345] "RemoveStaleState removing state" podUID="331946e9-4a0f-47a0-a839-0388e94dee69" containerName="mount-cgroup" Oct 2 20:00:11.514872 kubelet[1918]: E1002 20:00:11.514448 1918 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="331946e9-4a0f-47a0-a839-0388e94dee69" containerName="mount-cgroup" Oct 2 20:00:11.519064 systemd[1]: Created slice kubepods-burstable-pod53992ada_0ff0_4675_b217_5fb4552ed75d.slice. 
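The old pod's container and cgroup slice are gone, and the kubelet has just admitted a replacement cilium pod, cilium-wd77d (UID 53992ada-…), while dropping stale cpu- and memory-manager state for the removed mount-cgroup container. As a quick cross-check from outside the node, a small client-go sketch (assuming a reachable kubeconfig at the usual ~/.kube/config location) can list the cilium pods in kube-system and confirm that cilium-jbmjb has been superseded:

```go
// Minimal sketch, assuming a kubeconfig at the default path: list cilium pods
// in kube-system to confirm the replacement pod seen in the log above.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatalf("build clientset: %v", err)
	}

	pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatalf("list pods: %v", err)
	}
	for _, pod := range pods.Items {
		if strings.HasPrefix(pod.Name, "cilium-") {
			fmt.Printf("%s uid=%s phase=%s\n", pod.Name, pod.UID, pod.Status.Phase)
		}
	}
}
```

Against this cluster it should show cilium-wd77d with the new UID, with cilium-jbmjb disappearing once the API server finishes deleting the old pod object.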
Oct 2 20:00:11.557897 kubelet[1918]: I1002 20:00:11.557838 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbkjq\" (UniqueName: \"kubernetes.io/projected/53992ada-0ff0-4675-b217-5fb4552ed75d-kube-api-access-tbkjq\") pod \"cilium-wd77d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " pod="kube-system/cilium-wd77d" Oct 2 20:00:11.558208 kubelet[1918]: I1002 20:00:11.557966 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-cilium-cgroup\") pod \"cilium-wd77d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " pod="kube-system/cilium-wd77d" Oct 2 20:00:11.558208 kubelet[1918]: I1002 20:00:11.558060 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-lib-modules\") pod \"cilium-wd77d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " pod="kube-system/cilium-wd77d" Oct 2 20:00:11.558208 kubelet[1918]: I1002 20:00:11.558148 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-host-proc-sys-net\") pod \"cilium-wd77d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " pod="kube-system/cilium-wd77d" Oct 2 20:00:11.558495 kubelet[1918]: I1002 20:00:11.558231 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-cni-path\") pod \"cilium-wd77d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " pod="kube-system/cilium-wd77d" Oct 2 20:00:11.558495 kubelet[1918]: I1002 20:00:11.558327 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-etc-cni-netd\") pod \"cilium-wd77d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " pod="kube-system/cilium-wd77d" Oct 2 20:00:11.558495 kubelet[1918]: I1002 20:00:11.558403 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53992ada-0ff0-4675-b217-5fb4552ed75d-cilium-config-path\") pod \"cilium-wd77d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " pod="kube-system/cilium-wd77d" Oct 2 20:00:11.558495 kubelet[1918]: I1002 20:00:11.558439 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-cilium-run\") pod \"cilium-wd77d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " pod="kube-system/cilium-wd77d" Oct 2 20:00:11.558495 kubelet[1918]: I1002 20:00:11.558475 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-bpf-maps\") pod \"cilium-wd77d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " pod="kube-system/cilium-wd77d" Oct 2 20:00:11.558818 kubelet[1918]: I1002 20:00:11.558510 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-hostproc\") pod 
\"cilium-wd77d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " pod="kube-system/cilium-wd77d" Oct 2 20:00:11.558818 kubelet[1918]: I1002 20:00:11.558566 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-host-proc-sys-kernel\") pod \"cilium-wd77d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " pod="kube-system/cilium-wd77d" Oct 2 20:00:11.558818 kubelet[1918]: I1002 20:00:11.558603 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/53992ada-0ff0-4675-b217-5fb4552ed75d-hubble-tls\") pod \"cilium-wd77d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " pod="kube-system/cilium-wd77d" Oct 2 20:00:11.558818 kubelet[1918]: I1002 20:00:11.558638 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-xtables-lock\") pod \"cilium-wd77d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " pod="kube-system/cilium-wd77d" Oct 2 20:00:11.558818 kubelet[1918]: I1002 20:00:11.558676 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/53992ada-0ff0-4675-b217-5fb4552ed75d-clustermesh-secrets\") pod \"cilium-wd77d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " pod="kube-system/cilium-wd77d" Oct 2 20:00:11.833482 env[1333]: time="2023-10-02T20:00:11.833436506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wd77d,Uid:53992ada-0ff0-4675-b217-5fb4552ed75d,Namespace:kube-system,Attempt:0,}" Oct 2 20:00:11.870745 env[1333]: time="2023-10-02T20:00:11.870650261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:00:11.870745 env[1333]: time="2023-10-02T20:00:11.870717663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:00:11.870953 env[1333]: time="2023-10-02T20:00:11.870731964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:00:11.871334 env[1333]: time="2023-10-02T20:00:11.871290378Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d4026d7731f8e9372c01b113800a1d327050f848ab0a60de3405b1b805560c6c pid=2518 runtime=io.containerd.runc.v2 Oct 2 20:00:11.892271 systemd[1]: Started cri-containerd-d4026d7731f8e9372c01b113800a1d327050f848ab0a60de3405b1b805560c6c.scope. 
Oct 2 20:00:11.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.925549 kernel: audit: type=1400 audit(1696276811.900:668): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.925641 kernel: audit: type=1400 audit(1696276811.900:669): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.925685 kernel: audit: type=1400 audit(1696276811.900:670): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.951191 kernel: audit: type=1400 audit(1696276811.900:671): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.951312 kubelet[1918]: E1002 20:00:11.951286 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:11.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.963314 kernel: audit: type=1400 audit(1696276811.900:672): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.975693 kernel: audit: type=1400 audit(1696276811.900:673): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.975761 kernel: audit: type=1400 audit(1696276811.900:674): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 20:00:11.999095 kernel: audit: type=1400 audit(1696276811.900:675): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.925000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.925000 audit: BPF prog-id=79 op=LOAD Oct 2 20:00:11.931000 audit[2528]: AVC avc: denied { bpf } for pid=2528 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.931000 audit[2528]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=2518 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:00:11.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434303236643737333166386539333732633031623131333830306131 Oct 2 20:00:11.931000 audit[2528]: AVC avc: denied { perfmon } for pid=2528 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.931000 audit[2528]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=2518 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:00:11.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434303236643737333166386539333732633031623131333830306131 Oct 2 20:00:11.931000 audit[2528]: AVC avc: denied { bpf } for pid=2528 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.931000 audit[2528]: AVC avc: denied { bpf } for pid=2528 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.931000 audit[2528]: AVC avc: denied { bpf } for pid=2528 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.931000 audit[2528]: AVC avc: denied { perfmon } for pid=2528 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.931000 audit[2528]: AVC avc: denied { perfmon } for pid=2528 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.931000 audit[2528]: AVC avc: denied { perfmon } for 
pid=2528 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.931000 audit[2528]: AVC avc: denied { perfmon } for pid=2528 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.931000 audit[2528]: AVC avc: denied { perfmon } for pid=2528 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.931000 audit[2528]: AVC avc: denied { bpf } for pid=2528 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.931000 audit[2528]: AVC avc: denied { bpf } for pid=2528 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.931000 audit: BPF prog-id=80 op=LOAD Oct 2 20:00:11.931000 audit[2528]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c00037e9f0 items=0 ppid=2518 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:00:11.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434303236643737333166386539333732633031623131333830306131 Oct 2 20:00:11.938000 audit[2528]: AVC avc: denied { bpf } for pid=2528 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.938000 audit[2528]: AVC avc: denied { bpf } for pid=2528 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.938000 audit[2528]: AVC avc: denied { perfmon } for pid=2528 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.938000 audit[2528]: AVC avc: denied { perfmon } for pid=2528 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.938000 audit[2528]: AVC avc: denied { perfmon } for pid=2528 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.938000 audit[2528]: AVC avc: denied { perfmon } for pid=2528 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.938000 audit[2528]: AVC avc: denied { perfmon } for pid=2528 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.938000 audit[2528]: AVC avc: denied { bpf } for pid=2528 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.938000 audit[2528]: AVC avc: denied { bpf } for pid=2528 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 20:00:11.938000 audit: BPF prog-id=81 op=LOAD Oct 2 20:00:11.938000 audit[2528]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c00037ea38 items=0 ppid=2518 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:00:11.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434303236643737333166386539333732633031623131333830306131 Oct 2 20:00:11.951000 audit: BPF prog-id=81 op=UNLOAD Oct 2 20:00:11.951000 audit: BPF prog-id=80 op=UNLOAD Oct 2 20:00:11.951000 audit[2528]: AVC avc: denied { bpf } for pid=2528 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.951000 audit[2528]: AVC avc: denied { bpf } for pid=2528 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.951000 audit[2528]: AVC avc: denied { bpf } for pid=2528 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.951000 audit[2528]: AVC avc: denied { perfmon } for pid=2528 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.951000 audit[2528]: AVC avc: denied { perfmon } for pid=2528 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.951000 audit[2528]: AVC avc: denied { perfmon } for pid=2528 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.951000 audit[2528]: AVC avc: denied { perfmon } for pid=2528 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.951000 audit[2528]: AVC avc: denied { perfmon } for pid=2528 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.951000 audit[2528]: AVC avc: denied { bpf } for pid=2528 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.951000 audit[2528]: AVC avc: denied { bpf } for pid=2528 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:11.951000 audit: BPF prog-id=82 op=LOAD Oct 2 20:00:11.951000 audit[2528]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c00037ee48 items=0 ppid=2518 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:00:11.951000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434303236643737333166386539333732633031623131333830306131 Oct 2 20:00:12.011505 env[1333]: time="2023-10-02T20:00:12.011455679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wd77d,Uid:53992ada-0ff0-4675-b217-5fb4552ed75d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4026d7731f8e9372c01b113800a1d327050f848ab0a60de3405b1b805560c6c\"" Oct 2 20:00:12.013953 env[1333]: time="2023-10-02T20:00:12.013923042Z" level=info msg="CreateContainer within sandbox \"d4026d7731f8e9372c01b113800a1d327050f848ab0a60de3405b1b805560c6c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 20:00:12.048267 env[1333]: time="2023-10-02T20:00:12.048223027Z" level=info msg="CreateContainer within sandbox \"d4026d7731f8e9372c01b113800a1d327050f848ab0a60de3405b1b805560c6c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3eef16650b7d52c8dacecd9badd9274a22084ed1817319f5e1c97775613d9088\"" Oct 2 20:00:12.048915 env[1333]: time="2023-10-02T20:00:12.048851743Z" level=info msg="StartContainer for \"3eef16650b7d52c8dacecd9badd9274a22084ed1817319f5e1c97775613d9088\"" Oct 2 20:00:12.064697 systemd[1]: Started cri-containerd-3eef16650b7d52c8dacecd9badd9274a22084ed1817319f5e1c97775613d9088.scope. Oct 2 20:00:12.076156 systemd[1]: cri-containerd-3eef16650b7d52c8dacecd9badd9274a22084ed1817319f5e1c97775613d9088.scope: Deactivated successfully. Oct 2 20:00:12.095718 kubelet[1918]: I1002 20:00:12.093933 1918 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=331946e9-4a0f-47a0-a839-0388e94dee69 path="/var/lib/kubelet/pods/331946e9-4a0f-47a0-a839-0388e94dee69/volumes" Oct 2 20:00:12.104796 env[1333]: time="2023-10-02T20:00:12.104742084Z" level=info msg="shim disconnected" id=3eef16650b7d52c8dacecd9badd9274a22084ed1817319f5e1c97775613d9088 Oct 2 20:00:12.104967 env[1333]: time="2023-10-02T20:00:12.104797186Z" level=warning msg="cleaning up after shim disconnected" id=3eef16650b7d52c8dacecd9badd9274a22084ed1817319f5e1c97775613d9088 namespace=k8s.io Oct 2 20:00:12.104967 env[1333]: time="2023-10-02T20:00:12.104808786Z" level=info msg="cleaning up dead shim" Oct 2 20:00:12.112123 env[1333]: time="2023-10-02T20:00:12.112073574Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2580 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:00:12Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3eef16650b7d52c8dacecd9badd9274a22084ed1817319f5e1c97775613d9088/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:00:12.112393 env[1333]: time="2023-10-02T20:00:12.112331480Z" level=error msg="copy shim log" error="read /proc/self/fd/31: file already closed" Oct 2 20:00:12.115607 env[1333]: time="2023-10-02T20:00:12.115566064Z" level=error msg="Failed to pipe stdout of container \"3eef16650b7d52c8dacecd9badd9274a22084ed1817319f5e1c97775613d9088\"" error="reading from a closed fifo" Oct 2 20:00:12.116628 env[1333]: time="2023-10-02T20:00:12.116587690Z" level=error msg="Failed to pipe stderr of container \"3eef16650b7d52c8dacecd9badd9274a22084ed1817319f5e1c97775613d9088\"" error="reading from a closed fifo" Oct 2 20:00:12.122248 env[1333]: time="2023-10-02T20:00:12.122208135Z" 
level=error msg="StartContainer for \"3eef16650b7d52c8dacecd9badd9274a22084ed1817319f5e1c97775613d9088\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:00:12.122443 kubelet[1918]: E1002 20:00:12.122421 1918 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3eef16650b7d52c8dacecd9badd9274a22084ed1817319f5e1c97775613d9088" Oct 2 20:00:12.122788 kubelet[1918]: E1002 20:00:12.122566 1918 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:00:12.122788 kubelet[1918]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:00:12.122788 kubelet[1918]: rm /hostbin/cilium-mount Oct 2 20:00:12.122788 kubelet[1918]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tbkjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-wd77d_kube-system(53992ada-0ff0-4675-b217-5fb4552ed75d): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:00:12.122994 kubelet[1918]: E1002 20:00:12.122616 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-wd77d" podUID=53992ada-0ff0-4675-b217-5fb4552ed75d Oct 2 20:00:12.489198 env[1333]: 
time="2023-10-02T20:00:12.489066997Z" level=info msg="StopPodSandbox for \"d4026d7731f8e9372c01b113800a1d327050f848ab0a60de3405b1b805560c6c\"" Oct 2 20:00:12.489468 env[1333]: time="2023-10-02T20:00:12.489426006Z" level=info msg="Container to stop \"3eef16650b7d52c8dacecd9badd9274a22084ed1817319f5e1c97775613d9088\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:00:12.497174 systemd[1]: cri-containerd-d4026d7731f8e9372c01b113800a1d327050f848ab0a60de3405b1b805560c6c.scope: Deactivated successfully. Oct 2 20:00:12.496000 audit: BPF prog-id=79 op=UNLOAD Oct 2 20:00:12.500000 audit: BPF prog-id=82 op=UNLOAD Oct 2 20:00:12.530680 env[1333]: time="2023-10-02T20:00:12.530633669Z" level=info msg="shim disconnected" id=d4026d7731f8e9372c01b113800a1d327050f848ab0a60de3405b1b805560c6c Oct 2 20:00:12.530913 env[1333]: time="2023-10-02T20:00:12.530890975Z" level=warning msg="cleaning up after shim disconnected" id=d4026d7731f8e9372c01b113800a1d327050f848ab0a60de3405b1b805560c6c namespace=k8s.io Oct 2 20:00:12.530995 env[1333]: time="2023-10-02T20:00:12.530980678Z" level=info msg="cleaning up dead shim" Oct 2 20:00:12.538702 env[1333]: time="2023-10-02T20:00:12.538671576Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2612 runtime=io.containerd.runc.v2\n" Oct 2 20:00:12.538978 env[1333]: time="2023-10-02T20:00:12.538949083Z" level=info msg="TearDown network for sandbox \"d4026d7731f8e9372c01b113800a1d327050f848ab0a60de3405b1b805560c6c\" successfully" Oct 2 20:00:12.539065 env[1333]: time="2023-10-02T20:00:12.538978984Z" level=info msg="StopPodSandbox for \"d4026d7731f8e9372c01b113800a1d327050f848ab0a60de3405b1b805560c6c\" returns successfully" Oct 2 20:00:12.567390 kubelet[1918]: I1002 20:00:12.567307 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-lib-modules\") pod \"53992ada-0ff0-4675-b217-5fb4552ed75d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " Oct 2 20:00:12.567390 kubelet[1918]: I1002 20:00:12.567345 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "53992ada-0ff0-4675-b217-5fb4552ed75d" (UID: "53992ada-0ff0-4675-b217-5fb4552ed75d"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:12.567390 kubelet[1918]: I1002 20:00:12.567362 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-host-proc-sys-net\") pod \"53992ada-0ff0-4675-b217-5fb4552ed75d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " Oct 2 20:00:12.567675 kubelet[1918]: I1002 20:00:12.567404 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-cni-path\") pod \"53992ada-0ff0-4675-b217-5fb4552ed75d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " Oct 2 20:00:12.567675 kubelet[1918]: I1002 20:00:12.567444 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53992ada-0ff0-4675-b217-5fb4552ed75d-cilium-config-path\") pod \"53992ada-0ff0-4675-b217-5fb4552ed75d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " Oct 2 20:00:12.567675 kubelet[1918]: I1002 20:00:12.567469 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-xtables-lock\") pod \"53992ada-0ff0-4675-b217-5fb4552ed75d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " Oct 2 20:00:12.567675 kubelet[1918]: I1002 20:00:12.567497 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/53992ada-0ff0-4675-b217-5fb4552ed75d-clustermesh-secrets\") pod \"53992ada-0ff0-4675-b217-5fb4552ed75d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " Oct 2 20:00:12.567675 kubelet[1918]: I1002 20:00:12.567541 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbkjq\" (UniqueName: \"kubernetes.io/projected/53992ada-0ff0-4675-b217-5fb4552ed75d-kube-api-access-tbkjq\") pod \"53992ada-0ff0-4675-b217-5fb4552ed75d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " Oct 2 20:00:12.567675 kubelet[1918]: I1002 20:00:12.567567 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-cilium-run\") pod \"53992ada-0ff0-4675-b217-5fb4552ed75d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " Oct 2 20:00:12.567937 kubelet[1918]: I1002 20:00:12.567591 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-cilium-cgroup\") pod \"53992ada-0ff0-4675-b217-5fb4552ed75d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " Oct 2 20:00:12.567937 kubelet[1918]: I1002 20:00:12.567616 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-etc-cni-netd\") pod \"53992ada-0ff0-4675-b217-5fb4552ed75d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " Oct 2 20:00:12.567937 kubelet[1918]: I1002 20:00:12.567643 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-bpf-maps\") pod \"53992ada-0ff0-4675-b217-5fb4552ed75d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " Oct 2 20:00:12.567937 kubelet[1918]: I1002 20:00:12.567670 1918 
reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-hostproc\") pod \"53992ada-0ff0-4675-b217-5fb4552ed75d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " Oct 2 20:00:12.567937 kubelet[1918]: I1002 20:00:12.567696 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/53992ada-0ff0-4675-b217-5fb4552ed75d-hubble-tls\") pod \"53992ada-0ff0-4675-b217-5fb4552ed75d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " Oct 2 20:00:12.567937 kubelet[1918]: I1002 20:00:12.567731 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-host-proc-sys-kernel\") pod \"53992ada-0ff0-4675-b217-5fb4552ed75d\" (UID: \"53992ada-0ff0-4675-b217-5fb4552ed75d\") " Oct 2 20:00:12.567937 kubelet[1918]: I1002 20:00:12.567787 1918 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-lib-modules\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:12.568287 kubelet[1918]: I1002 20:00:12.567816 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "53992ada-0ff0-4675-b217-5fb4552ed75d" (UID: "53992ada-0ff0-4675-b217-5fb4552ed75d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:12.568287 kubelet[1918]: I1002 20:00:12.567844 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "53992ada-0ff0-4675-b217-5fb4552ed75d" (UID: "53992ada-0ff0-4675-b217-5fb4552ed75d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:12.568287 kubelet[1918]: I1002 20:00:12.567864 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-cni-path" (OuterVolumeSpecName: "cni-path") pod "53992ada-0ff0-4675-b217-5fb4552ed75d" (UID: "53992ada-0ff0-4675-b217-5fb4552ed75d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:12.568287 kubelet[1918]: W1002 20:00:12.568039 1918 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/53992ada-0ff0-4675-b217-5fb4552ed75d/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:00:12.569602 kubelet[1918]: I1002 20:00:12.569572 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "53992ada-0ff0-4675-b217-5fb4552ed75d" (UID: "53992ada-0ff0-4675-b217-5fb4552ed75d"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:12.569733 kubelet[1918]: I1002 20:00:12.569613 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "53992ada-0ff0-4675-b217-5fb4552ed75d" (UID: "53992ada-0ff0-4675-b217-5fb4552ed75d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:12.569864 kubelet[1918]: I1002 20:00:12.569839 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "53992ada-0ff0-4675-b217-5fb4552ed75d" (UID: "53992ada-0ff0-4675-b217-5fb4552ed75d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:12.569952 kubelet[1918]: I1002 20:00:12.569875 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "53992ada-0ff0-4675-b217-5fb4552ed75d" (UID: "53992ada-0ff0-4675-b217-5fb4552ed75d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:12.569952 kubelet[1918]: I1002 20:00:12.569901 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-hostproc" (OuterVolumeSpecName: "hostproc") pod "53992ada-0ff0-4675-b217-5fb4552ed75d" (UID: "53992ada-0ff0-4675-b217-5fb4552ed75d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:12.571887 kubelet[1918]: I1002 20:00:12.570453 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "53992ada-0ff0-4675-b217-5fb4552ed75d" (UID: "53992ada-0ff0-4675-b217-5fb4552ed75d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:12.571887 kubelet[1918]: I1002 20:00:12.571639 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53992ada-0ff0-4675-b217-5fb4552ed75d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "53992ada-0ff0-4675-b217-5fb4552ed75d" (UID: "53992ada-0ff0-4675-b217-5fb4552ed75d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:00:12.575125 kubelet[1918]: I1002 20:00:12.575094 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53992ada-0ff0-4675-b217-5fb4552ed75d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "53992ada-0ff0-4675-b217-5fb4552ed75d" (UID: "53992ada-0ff0-4675-b217-5fb4552ed75d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:00:12.576281 kubelet[1918]: I1002 20:00:12.576250 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53992ada-0ff0-4675-b217-5fb4552ed75d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "53992ada-0ff0-4675-b217-5fb4552ed75d" (UID: "53992ada-0ff0-4675-b217-5fb4552ed75d"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:00:12.577994 kubelet[1918]: I1002 20:00:12.577950 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53992ada-0ff0-4675-b217-5fb4552ed75d-kube-api-access-tbkjq" (OuterVolumeSpecName: "kube-api-access-tbkjq") pod "53992ada-0ff0-4675-b217-5fb4552ed75d" (UID: "53992ada-0ff0-4675-b217-5fb4552ed75d"). InnerVolumeSpecName "kube-api-access-tbkjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:00:12.668625 kubelet[1918]: I1002 20:00:12.668568 1918 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-cilium-cgroup\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:12.668625 kubelet[1918]: I1002 20:00:12.668612 1918 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-etc-cni-netd\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:12.668625 kubelet[1918]: I1002 20:00:12.668632 1918 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-bpf-maps\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:12.668925 kubelet[1918]: I1002 20:00:12.668646 1918 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-hostproc\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:12.668925 kubelet[1918]: I1002 20:00:12.668661 1918 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/53992ada-0ff0-4675-b217-5fb4552ed75d-hubble-tls\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:12.668925 kubelet[1918]: I1002 20:00:12.668677 1918 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-host-proc-sys-kernel\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:12.668925 kubelet[1918]: I1002 20:00:12.668692 1918 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-host-proc-sys-net\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:12.668925 kubelet[1918]: I1002 20:00:12.668705 1918 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-cni-path\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:12.668925 kubelet[1918]: I1002 20:00:12.668719 1918 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53992ada-0ff0-4675-b217-5fb4552ed75d-cilium-config-path\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:12.668925 kubelet[1918]: I1002 20:00:12.668734 1918 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/53992ada-0ff0-4675-b217-5fb4552ed75d-clustermesh-secrets\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:12.668925 kubelet[1918]: I1002 20:00:12.668748 1918 reconciler.go:399] "Volume detached for volume \"kube-api-access-tbkjq\" (UniqueName: \"kubernetes.io/projected/53992ada-0ff0-4675-b217-5fb4552ed75d-kube-api-access-tbkjq\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:12.669187 kubelet[1918]: I1002 20:00:12.668764 1918 reconciler.go:399] "Volume detached for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-cilium-run\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:12.669187 kubelet[1918]: I1002 20:00:12.668780 1918 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53992ada-0ff0-4675-b217-5fb4552ed75d-xtables-lock\") on node \"10.200.8.20\" DevicePath \"\"" Oct 2 20:00:12.829350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4026d7731f8e9372c01b113800a1d327050f848ab0a60de3405b1b805560c6c-rootfs.mount: Deactivated successfully. Oct 2 20:00:12.829504 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d4026d7731f8e9372c01b113800a1d327050f848ab0a60de3405b1b805560c6c-shm.mount: Deactivated successfully. Oct 2 20:00:12.829622 systemd[1]: var-lib-kubelet-pods-53992ada\x2d0ff0\x2d4675\x2db217\x2d5fb4552ed75d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtbkjq.mount: Deactivated successfully. Oct 2 20:00:12.829729 systemd[1]: var-lib-kubelet-pods-53992ada\x2d0ff0\x2d4675\x2db217\x2d5fb4552ed75d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 20:00:12.829830 systemd[1]: var-lib-kubelet-pods-53992ada\x2d0ff0\x2d4675\x2db217\x2d5fb4552ed75d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 20:00:12.951806 kubelet[1918]: E1002 20:00:12.951761 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:13.009896 kubelet[1918]: E1002 20:00:13.009862 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:13.491663 kubelet[1918]: I1002 20:00:13.491631 1918 scope.go:115] "RemoveContainer" containerID="3eef16650b7d52c8dacecd9badd9274a22084ed1817319f5e1c97775613d9088" Oct 2 20:00:13.493896 env[1333]: time="2023-10-02T20:00:13.493852664Z" level=info msg="RemoveContainer for \"3eef16650b7d52c8dacecd9badd9274a22084ed1817319f5e1c97775613d9088\"" Oct 2 20:00:13.497314 systemd[1]: Removed slice kubepods-burstable-pod53992ada_0ff0_4675_b217_5fb4552ed75d.slice. 
Oct 2 20:00:13.507121 env[1333]: time="2023-10-02T20:00:13.507086706Z" level=info msg="RemoveContainer for \"3eef16650b7d52c8dacecd9badd9274a22084ed1817319f5e1c97775613d9088\" returns successfully" Oct 2 20:00:13.952721 kubelet[1918]: E1002 20:00:13.952655 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:14.091846 kubelet[1918]: I1002 20:00:14.091812 1918 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=53992ada-0ff0-4675-b217-5fb4552ed75d path="/var/lib/kubelet/pods/53992ada-0ff0-4675-b217-5fb4552ed75d/volumes" Oct 2 20:00:14.953064 kubelet[1918]: E1002 20:00:14.953004 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:15.209315 kubelet[1918]: W1002 20:00:15.209187 1918 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53992ada_0ff0_4675_b217_5fb4552ed75d.slice/cri-containerd-3eef16650b7d52c8dacecd9badd9274a22084ed1817319f5e1c97775613d9088.scope WatchSource:0}: container "3eef16650b7d52c8dacecd9badd9274a22084ed1817319f5e1c97775613d9088" in namespace "k8s.io": not found Oct 2 20:00:15.953804 kubelet[1918]: E1002 20:00:15.953742 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:16.127901 kubelet[1918]: I1002 20:00:16.127849 1918 topology_manager.go:205] "Topology Admit Handler" Oct 2 20:00:16.127901 kubelet[1918]: E1002 20:00:16.127909 1918 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="53992ada-0ff0-4675-b217-5fb4552ed75d" containerName="mount-cgroup" Oct 2 20:00:16.128175 kubelet[1918]: I1002 20:00:16.127937 1918 memory_manager.go:345] "RemoveStaleState removing state" podUID="53992ada-0ff0-4675-b217-5fb4552ed75d" containerName="mount-cgroup" Oct 2 20:00:16.131257 kubelet[1918]: I1002 20:00:16.131227 1918 topology_manager.go:205] "Topology Admit Handler" Oct 2 20:00:16.134506 systemd[1]: Created slice kubepods-burstable-podac9e8ee3_388e_4b01_99a8_7b990e1a07c0.slice. Oct 2 20:00:16.150871 systemd[1]: Created slice kubepods-besteffort-podb677cfbb_8bde_41d4_8033_6e40c653cf1c.slice. 
Oct 2 20:00:16.190415 kubelet[1918]: I1002 20:00:16.190373 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-cilium-cgroup\") pod \"cilium-dlm7z\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") " pod="kube-system/cilium-dlm7z" Oct 2 20:00:16.190637 kubelet[1918]: I1002 20:00:16.190520 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-cni-path\") pod \"cilium-dlm7z\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") " pod="kube-system/cilium-dlm7z" Oct 2 20:00:16.190637 kubelet[1918]: I1002 20:00:16.190595 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-xtables-lock\") pod \"cilium-dlm7z\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") " pod="kube-system/cilium-dlm7z" Oct 2 20:00:16.190787 kubelet[1918]: I1002 20:00:16.190674 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-cilium-config-path\") pod \"cilium-dlm7z\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") " pod="kube-system/cilium-dlm7z" Oct 2 20:00:16.190787 kubelet[1918]: I1002 20:00:16.190755 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-cilium-ipsec-secrets\") pod \"cilium-dlm7z\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") " pod="kube-system/cilium-dlm7z" Oct 2 20:00:16.190911 kubelet[1918]: I1002 20:00:16.190837 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmmkf\" (UniqueName: \"kubernetes.io/projected/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-kube-api-access-gmmkf\") pod \"cilium-dlm7z\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") " pod="kube-system/cilium-dlm7z" Oct 2 20:00:16.190975 kubelet[1918]: I1002 20:00:16.190919 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b677cfbb-8bde-41d4-8033-6e40c653cf1c-cilium-config-path\") pod \"cilium-operator-69b677f97c-p7vs6\" (UID: \"b677cfbb-8bde-41d4-8033-6e40c653cf1c\") " pod="kube-system/cilium-operator-69b677f97c-p7vs6" Oct 2 20:00:16.191036 kubelet[1918]: I1002 20:00:16.190999 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg64d\" (UniqueName: \"kubernetes.io/projected/b677cfbb-8bde-41d4-8033-6e40c653cf1c-kube-api-access-kg64d\") pod \"cilium-operator-69b677f97c-p7vs6\" (UID: \"b677cfbb-8bde-41d4-8033-6e40c653cf1c\") " pod="kube-system/cilium-operator-69b677f97c-p7vs6" Oct 2 20:00:16.191099 kubelet[1918]: I1002 20:00:16.191080 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-hubble-tls\") pod \"cilium-dlm7z\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") " pod="kube-system/cilium-dlm7z" Oct 2 20:00:16.191180 kubelet[1918]: I1002 20:00:16.191165 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-etc-cni-netd\") pod \"cilium-dlm7z\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") " pod="kube-system/cilium-dlm7z" Oct 2 20:00:16.191279 kubelet[1918]: I1002 20:00:16.191255 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-host-proc-sys-kernel\") pod \"cilium-dlm7z\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") " pod="kube-system/cilium-dlm7z" Oct 2 20:00:16.191348 kubelet[1918]: I1002 20:00:16.191322 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-host-proc-sys-net\") pod \"cilium-dlm7z\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") " pod="kube-system/cilium-dlm7z" Oct 2 20:00:16.191410 kubelet[1918]: I1002 20:00:16.191377 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-hostproc\") pod \"cilium-dlm7z\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") " pod="kube-system/cilium-dlm7z" Oct 2 20:00:16.191470 kubelet[1918]: I1002 20:00:16.191417 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-lib-modules\") pod \"cilium-dlm7z\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") " pod="kube-system/cilium-dlm7z" Oct 2 20:00:16.191547 kubelet[1918]: I1002 20:00:16.191471 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-clustermesh-secrets\") pod \"cilium-dlm7z\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") " pod="kube-system/cilium-dlm7z" Oct 2 20:00:16.191897 kubelet[1918]: I1002 20:00:16.191790 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-cilium-run\") pod \"cilium-dlm7z\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") " pod="kube-system/cilium-dlm7z" Oct 2 20:00:16.192082 kubelet[1918]: I1002 20:00:16.192060 1918 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-bpf-maps\") pod \"cilium-dlm7z\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") " pod="kube-system/cilium-dlm7z" Oct 2 20:00:16.450342 env[1333]: time="2023-10-02T20:00:16.450296891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dlm7z,Uid:ac9e8ee3-388e-4b01-99a8-7b990e1a07c0,Namespace:kube-system,Attempt:0,}" Oct 2 20:00:16.453446 env[1333]: time="2023-10-02T20:00:16.453409472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-p7vs6,Uid:b677cfbb-8bde-41d4-8033-6e40c653cf1c,Namespace:kube-system,Attempt:0,}" Oct 2 20:00:16.516615 env[1333]: time="2023-10-02T20:00:16.514117164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:00:16.516615 env[1333]: time="2023-10-02T20:00:16.514150565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:00:16.516615 env[1333]: time="2023-10-02T20:00:16.514160065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:00:16.516615 env[1333]: time="2023-10-02T20:00:16.514317169Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c3c9fbf8a782608561ce6101beaef01dc85ca13af1b5d6ce50b8d49c557a62b pid=2661 runtime=io.containerd.runc.v2 Oct 2 20:00:16.516615 env[1333]: time="2023-10-02T20:00:16.509609846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:00:16.516615 env[1333]: time="2023-10-02T20:00:16.509653747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:00:16.516615 env[1333]: time="2023-10-02T20:00:16.509690548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:00:16.516615 env[1333]: time="2023-10-02T20:00:16.509905454Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e45664f0ac7be11e5db273fbd964cb761e7ca3ee28a8f3d4d314dbbb900e57d pid=2644 runtime=io.containerd.runc.v2 Oct 2 20:00:16.533249 systemd[1]: Started cri-containerd-8e45664f0ac7be11e5db273fbd964cb761e7ca3ee28a8f3d4d314dbbb900e57d.scope. Oct 2 20:00:16.545797 systemd[1]: Started cri-containerd-4c3c9fbf8a782608561ce6101beaef01dc85ca13af1b5d6ce50b8d49c557a62b.scope. 
Oct 2 20:00:16.563551 kernel: kauditd_printk_skb: 51 callbacks suppressed Oct 2 20:00:16.563655 kernel: audit: type=1400 audit(1696276816.556:688): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.556000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.556000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.586825 kernel: audit: type=1400 audit(1696276816.556:689): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.605222 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 20:00:16.605317 kernel: audit: type=1400 audit(1696276816.556:690): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.605341 kernel: audit: audit_lost=16 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 20:00:16.556000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.621618 kernel: audit: type=1400 audit(1696276816.556:691): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.621715 kernel: audit: backlog limit exceeded Oct 2 20:00:16.621745 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 20:00:16.556000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.628895 kernel: audit: type=1400 audit(1696276816.556:692): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.628974 kernel: audit: audit_lost=17 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 20:00:16.556000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.556000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.556000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.556000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.556000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 20:00:16.557000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.557000 audit: BPF prog-id=83 op=LOAD Oct 2 20:00:16.563000 audit[2673]: AVC avc: denied { bpf } for pid=2673 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.563000 audit[2673]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c00011fc48 a2=10 a3=1c items=0 ppid=2644 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:00:16.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865343536363466306163376265313165356462323733666264393634 Oct 2 20:00:16.563000 audit[2673]: AVC avc: denied { perfmon } for pid=2673 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.563000 audit[2673]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c00011f6b0 a2=3c a3=c items=0 ppid=2644 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:00:16.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865343536363466306163376265313165356462323733666264393634 Oct 2 20:00:16.563000 audit[2673]: AVC avc: denied { bpf } for pid=2673 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.563000 audit[2673]: AVC avc: denied { bpf } for pid=2673 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.563000 audit[2673]: AVC avc: denied { bpf } for pid=2673 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.563000 audit[2673]: AVC avc: denied { perfmon } for pid=2673 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.563000 audit[2673]: AVC avc: denied { perfmon } for pid=2673 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.563000 audit[2673]: AVC avc: denied { perfmon } for pid=2673 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.563000 audit[2673]: AVC avc: denied { perfmon } for pid=2673 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.563000 audit[2673]: AVC avc: denied { perfmon } for pid=2673 comm="runc" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.563000 audit[2673]: AVC avc: denied { bpf } for pid=2673 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.563000 audit[2673]: AVC avc: denied { bpf } for pid=2673 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.563000 audit: BPF prog-id=84 op=LOAD Oct 2 20:00:16.563000 audit[2673]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00011f9d8 a2=78 a3=c000216340 items=0 ppid=2644 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:00:16.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865343536363466306163376265313165356462323733666264393634 Oct 2 20:00:16.574000 audit[2673]: AVC avc: denied { bpf } for pid=2673 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.574000 audit[2673]: AVC avc: denied { bpf } for pid=2673 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.574000 audit[2673]: AVC avc: denied { perfmon } for pid=2673 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.574000 audit[2673]: AVC avc: denied { perfmon } for pid=2673 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.574000 audit[2673]: AVC avc: denied { perfmon } for pid=2673 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.574000 audit[2673]: AVC avc: denied { perfmon } for pid=2673 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.574000 audit[2673]: AVC avc: denied { perfmon } for pid=2673 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.574000 audit[2673]: AVC avc: denied { bpf } for pid=2673 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.574000 audit[2673]: AVC avc: denied { bpf } for pid=2673 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.574000 audit: BPF prog-id=85 op=LOAD Oct 2 20:00:16.574000 audit[2673]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00011f770 a2=78 a3=c000216388 items=0 ppid=2644 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 
20:00:16.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865343536363466306163376265313165356462323733666264393634 Oct 2 20:00:16.586000 audit: BPF prog-id=85 op=UNLOAD Oct 2 20:00:16.586000 audit: BPF prog-id=84 op=UNLOAD Oct 2 20:00:16.586000 audit[2673]: AVC avc: denied { bpf } for pid=2673 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.586000 audit[2673]: AVC avc: denied { bpf } for pid=2673 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.586000 audit[2673]: AVC avc: denied { bpf } for pid=2673 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.586000 audit[2673]: AVC avc: denied { perfmon } for pid=2673 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.586000 audit[2673]: AVC avc: denied { perfmon } for pid=2673 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.586000 audit[2673]: AVC avc: denied { perfmon } for pid=2673 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.586000 audit[2673]: AVC avc: denied { perfmon } for pid=2673 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.586000 audit[2673]: AVC avc: denied { perfmon } for pid=2673 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.586000 audit[2673]: AVC avc: denied { bpf } for pid=2673 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.586000 audit[2673]: AVC avc: denied { bpf } for pid=2673 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.586000 audit: BPF prog-id=86 op=LOAD Oct 2 20:00:16.586000 audit[2673]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00011fc30 a2=78 a3=c000216798 items=0 ppid=2644 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:00:16.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865343536363466306163376265313165356462323733666264393634 Oct 2 20:00:16.587000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.587000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.587000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.587000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.587000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.587000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.609000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.644000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.644000 audit: BPF prog-id=87 op=LOAD Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { bpf } for pid=2674 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=2661 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:00:16.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463336339666266386137383236303835363163653631303162656165 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { perfmon } for pid=2674 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=2661 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:00:16.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463336339666266386137383236303835363163653631303162656165 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { bpf } for pid=2674 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { bpf } for pid=2674 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
20:00:16.645000 audit[2674]: AVC avc: denied { bpf } for pid=2674 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { perfmon } for pid=2674 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { perfmon } for pid=2674 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { perfmon } for pid=2674 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { perfmon } for pid=2674 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { perfmon } for pid=2674 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { bpf } for pid=2674 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { bpf } for pid=2674 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit: BPF prog-id=88 op=LOAD Oct 2 20:00:16.645000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001459d8 a2=78 a3=c0001c61f0 items=0 ppid=2661 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:00:16.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463336339666266386137383236303835363163653631303162656165 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { bpf } for pid=2674 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { bpf } for pid=2674 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { perfmon } for pid=2674 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { perfmon } for pid=2674 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { perfmon } for pid=2674 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { perfmon } for pid=2674 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { perfmon } for pid=2674 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { bpf } for pid=2674 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { bpf } for pid=2674 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit: BPF prog-id=89 op=LOAD Oct 2 20:00:16.645000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000145770 a2=78 a3=c0001c6238 items=0 ppid=2661 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:00:16.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463336339666266386137383236303835363163653631303162656165 Oct 2 20:00:16.645000 audit: BPF prog-id=89 op=UNLOAD Oct 2 20:00:16.645000 audit: BPF prog-id=88 op=UNLOAD Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { bpf } for pid=2674 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { bpf } for pid=2674 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { bpf } for pid=2674 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { perfmon } for pid=2674 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { perfmon } for pid=2674 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { perfmon } for pid=2674 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { perfmon } for pid=2674 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { perfmon } for pid=2674 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { bpf } for pid=2674 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit[2674]: AVC avc: denied { bpf } for pid=2674 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:16.645000 audit: BPF prog-id=90 op=LOAD Oct 2 20:00:16.645000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000145c30 a2=78 a3=c0001c6648 items=0 ppid=2661 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:00:16.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463336339666266386137383236303835363163653631303162656165 Oct 2 20:00:16.663609 env[1333]: time="2023-10-02T20:00:16.663570282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dlm7z,Uid:ac9e8ee3-388e-4b01-99a8-7b990e1a07c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e45664f0ac7be11e5db273fbd964cb761e7ca3ee28a8f3d4d314dbbb900e57d\"" Oct 2 20:00:16.666682 env[1333]: time="2023-10-02T20:00:16.666638663Z" level=info msg="CreateContainer within sandbox \"8e45664f0ac7be11e5db273fbd964cb761e7ca3ee28a8f3d4d314dbbb900e57d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 20:00:16.682013 env[1333]: time="2023-10-02T20:00:16.681969164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-p7vs6,Uid:b677cfbb-8bde-41d4-8033-6e40c653cf1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c3c9fbf8a782608561ce6101beaef01dc85ca13af1b5d6ce50b8d49c557a62b\"" Oct 2 20:00:16.683487 env[1333]: time="2023-10-02T20:00:16.683458204Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\"" Oct 2 20:00:16.717059 env[1333]: time="2023-10-02T20:00:16.716964082Z" level=info msg="CreateContainer within sandbox \"8e45664f0ac7be11e5db273fbd964cb761e7ca3ee28a8f3d4d314dbbb900e57d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f11c4bd5f4502d40f02dcfadf04cba32b46e0d223fde693bb3ad35bf3adc4f45\"" Oct 2 20:00:16.718054 env[1333]: time="2023-10-02T20:00:16.718025910Z" level=info msg="StartContainer for \"f11c4bd5f4502d40f02dcfadf04cba32b46e0d223fde693bb3ad35bf3adc4f45\"" Oct 2 20:00:16.738661 systemd[1]: Started cri-containerd-f11c4bd5f4502d40f02dcfadf04cba32b46e0d223fde693bb3ad35bf3adc4f45.scope. Oct 2 20:00:16.748870 systemd[1]: cri-containerd-f11c4bd5f4502d40f02dcfadf04cba32b46e0d223fde693bb3ad35bf3adc4f45.scope: Deactivated successfully. 
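The PROCTITLE fields in the audit records above are the audited process's command line, hex-encoded with NUL bytes between arguments; the values attached to the runc records decode to an invocation of the form "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/...". A minimal Go sketch of that decoding, using a shortened sample value rather than a full field from the log:

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex field back into the
// NUL-separated argv it encodes.
func decodeProctitle(field string) ([]string, error) {
	raw, err := hex.DecodeString(field)
	if err != nil {
		return nil, err
	}
	return strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00"), nil
}

func main() {
	// Shortened sample; the full fields above continue with the --root and
	// --log paths shown in the containerd records.
	argv, err := decodeProctitle("72756E63002D2D726F6F74")
	if err != nil {
		panic(err)
	}
	fmt.Println(argv) // [runc --root]
}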
Oct 2 20:00:16.808176 env[1333]: time="2023-10-02T20:00:16.808107771Z" level=info msg="shim disconnected" id=f11c4bd5f4502d40f02dcfadf04cba32b46e0d223fde693bb3ad35bf3adc4f45 Oct 2 20:00:16.808176 env[1333]: time="2023-10-02T20:00:16.808174873Z" level=warning msg="cleaning up after shim disconnected" id=f11c4bd5f4502d40f02dcfadf04cba32b46e0d223fde693bb3ad35bf3adc4f45 namespace=k8s.io Oct 2 20:00:16.808461 env[1333]: time="2023-10-02T20:00:16.808186373Z" level=info msg="cleaning up dead shim" Oct 2 20:00:16.816398 env[1333]: time="2023-10-02T20:00:16.816359588Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2745 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:00:16Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f11c4bd5f4502d40f02dcfadf04cba32b46e0d223fde693bb3ad35bf3adc4f45/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:00:16.816724 env[1333]: time="2023-10-02T20:00:16.816674396Z" level=error msg="copy shim log" error="read /proc/self/fd/38: file already closed" Oct 2 20:00:16.818634 env[1333]: time="2023-10-02T20:00:16.818585746Z" level=error msg="Failed to pipe stdout of container \"f11c4bd5f4502d40f02dcfadf04cba32b46e0d223fde693bb3ad35bf3adc4f45\"" error="reading from a closed fifo" Oct 2 20:00:16.822607 env[1333]: time="2023-10-02T20:00:16.822562550Z" level=error msg="Failed to pipe stderr of container \"f11c4bd5f4502d40f02dcfadf04cba32b46e0d223fde693bb3ad35bf3adc4f45\"" error="reading from a closed fifo" Oct 2 20:00:16.831738 env[1333]: time="2023-10-02T20:00:16.831686090Z" level=error msg="StartContainer for \"f11c4bd5f4502d40f02dcfadf04cba32b46e0d223fde693bb3ad35bf3adc4f45\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:00:16.831961 kubelet[1918]: E1002 20:00:16.831940 1918 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f11c4bd5f4502d40f02dcfadf04cba32b46e0d223fde693bb3ad35bf3adc4f45" Oct 2 20:00:16.832082 kubelet[1918]: E1002 20:00:16.832067 1918 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:00:16.832082 kubelet[1918]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:00:16.832082 kubelet[1918]: rm /hostbin/cilium-mount Oct 2 20:00:16.832082 kubelet[1918]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-gmmkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-dlm7z_kube-system(ac9e8ee3-388e-4b01-99a8-7b990e1a07c0): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:00:16.832300 kubelet[1918]: E1002 20:00:16.832115 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-dlm7z" podUID=ac9e8ee3-388e-4b01-99a8-7b990e1a07c0 Oct 2 20:00:16.954238 kubelet[1918]: E1002 20:00:16.954198 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:17.502891 env[1333]: time="2023-10-02T20:00:17.502848837Z" level=info msg="CreateContainer within sandbox \"8e45664f0ac7be11e5db273fbd964cb761e7ca3ee28a8f3d4d314dbbb900e57d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 20:00:17.526352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3804216736.mount: Deactivated successfully. Oct 2 20:00:17.532877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2829672102.mount: Deactivated successfully. Oct 2 20:00:17.547892 env[1333]: time="2023-10-02T20:00:17.547844321Z" level=info msg="CreateContainer within sandbox \"8e45664f0ac7be11e5db273fbd964cb761e7ca3ee28a8f3d4d314dbbb900e57d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af\"" Oct 2 20:00:17.548685 env[1333]: time="2023-10-02T20:00:17.548650742Z" level=info msg="StartContainer for \"c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af\"" Oct 2 20:00:17.564643 systemd[1]: Started cri-containerd-c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af.scope. 
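Each StartContainer attempt for mount-cgroup dies at the same point: before exec'ing the container process, the runtime tries to apply the SELinux options requested in the spec above (Type:spc_t, Level:s0) by writing a label to /proc/self/attr/keycreate, and the kernel on this node rejects that write with "invalid argument". A rough Go sketch of just that step; the direct file write and the label string are illustrative assumptions (runc goes through its own SELinux helpers rather than this call):

package main

import (
	"fmt"
	"os"
)

// setKeyCreateLabel mimics the single step that fails in the records above:
// writing the requested SELinux label to the keycreate attribute of the
// current process before the container process is exec'ed.
func setKeyCreateLabel(label string) error {
	return os.WriteFile("/proc/self/attr/keycreate", []byte(label), 0o644)
}

func main() {
	// Illustrative label derived from the spec's SELinuxOptions (Type:spc_t,
	// Level:s0); the exact computed context is an assumption.
	if err := setKeyCreateLabel("system_u:system_r:spc_t:s0"); err != nil {
		// On this node the write fails with EINVAL, which containerd reports
		// as "write /proc/self/attr/keycreate: invalid argument".
		fmt.Println("keycreate:", err)
	}
}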
Oct 2 20:00:17.577227 systemd[1]: cri-containerd-c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af.scope: Deactivated successfully. Oct 2 20:00:17.599024 env[1333]: time="2023-10-02T20:00:17.598971467Z" level=info msg="shim disconnected" id=c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af Oct 2 20:00:17.599024 env[1333]: time="2023-10-02T20:00:17.599023468Z" level=warning msg="cleaning up after shim disconnected" id=c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af namespace=k8s.io Oct 2 20:00:17.599290 env[1333]: time="2023-10-02T20:00:17.599033968Z" level=info msg="cleaning up dead shim" Oct 2 20:00:17.607067 env[1333]: time="2023-10-02T20:00:17.607027379Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2782 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:00:17Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:00:17.607318 env[1333]: time="2023-10-02T20:00:17.607268485Z" level=error msg="copy shim log" error="read /proc/self/fd/40: file already closed" Oct 2 20:00:17.608608 env[1333]: time="2023-10-02T20:00:17.608568319Z" level=error msg="Failed to pipe stdout of container \"c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af\"" error="reading from a closed fifo" Oct 2 20:00:17.609638 env[1333]: time="2023-10-02T20:00:17.609595446Z" level=error msg="Failed to pipe stderr of container \"c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af\"" error="reading from a closed fifo" Oct 2 20:00:17.614979 env[1333]: time="2023-10-02T20:00:17.614932787Z" level=error msg="StartContainer for \"c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:00:17.615213 kubelet[1918]: E1002 20:00:17.615190 1918 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af" Oct 2 20:00:17.615479 kubelet[1918]: E1002 20:00:17.615372 1918 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:00:17.615479 kubelet[1918]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:00:17.615479 kubelet[1918]: rm /hostbin/cilium-mount Oct 2 20:00:17.615479 kubelet[1918]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-gmmkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-dlm7z_kube-system(ac9e8ee3-388e-4b01-99a8-7b990e1a07c0): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:00:17.615733 kubelet[1918]: E1002 20:00:17.615453 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-dlm7z" podUID=ac9e8ee3-388e-4b01-99a8-7b990e1a07c0 Oct 2 20:00:17.955261 kubelet[1918]: E1002 20:00:17.955199 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:18.011294 kubelet[1918]: E1002 20:00:18.011260 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:18.505235 kubelet[1918]: I1002 20:00:18.505202 1918 scope.go:115] "RemoveContainer" containerID="f11c4bd5f4502d40f02dcfadf04cba32b46e0d223fde693bb3ad35bf3adc4f45" Oct 2 20:00:18.505628 kubelet[1918]: I1002 20:00:18.505610 1918 scope.go:115] "RemoveContainer" containerID="f11c4bd5f4502d40f02dcfadf04cba32b46e0d223fde693bb3ad35bf3adc4f45" Oct 2 20:00:18.506907 env[1333]: time="2023-10-02T20:00:18.506865413Z" level=info msg="RemoveContainer for \"f11c4bd5f4502d40f02dcfadf04cba32b46e0d223fde693bb3ad35bf3adc4f45\"" Oct 2 20:00:18.521929 env[1333]: time="2023-10-02T20:00:18.521895010Z" level=info msg="RemoveContainer for \"f11c4bd5f4502d40f02dcfadf04cba32b46e0d223fde693bb3ad35bf3adc4f45\" returns successfully" Oct 2 20:00:18.522392 env[1333]: time="2023-10-02T20:00:18.522359922Z" level=info msg="RemoveContainer for \"f11c4bd5f4502d40f02dcfadf04cba32b46e0d223fde693bb3ad35bf3adc4f45\"" Oct 2 20:00:18.522480 env[1333]: time="2023-10-02T20:00:18.522395123Z" level=info msg="RemoveContainer for 
\"f11c4bd5f4502d40f02dcfadf04cba32b46e0d223fde693bb3ad35bf3adc4f45\" returns successfully" Oct 2 20:00:18.522822 kubelet[1918]: E1002 20:00:18.522805 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-dlm7z_kube-system(ac9e8ee3-388e-4b01-99a8-7b990e1a07c0)\"" pod="kube-system/cilium-dlm7z" podUID=ac9e8ee3-388e-4b01-99a8-7b990e1a07c0 Oct 2 20:00:18.956000 kubelet[1918]: E1002 20:00:18.955943 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:19.053061 env[1333]: time="2023-10-02T20:00:19.053011247Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:00:19.061113 env[1333]: time="2023-10-02T20:00:19.061070661Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b7eda471b44d1665b27a56412a479c6baff49461eb4cd7e9886be66da63fd36e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:00:19.066152 env[1333]: time="2023-10-02T20:00:19.066115994Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:00:19.066623 env[1333]: time="2023-10-02T20:00:19.066590607Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\" returns image reference \"sha256:b7eda471b44d1665b27a56412a479c6baff49461eb4cd7e9886be66da63fd36e\"" Oct 2 20:00:19.068748 env[1333]: time="2023-10-02T20:00:19.068718663Z" level=info msg="CreateContainer within sandbox \"4c3c9fbf8a782608561ce6101beaef01dc85ca13af1b5d6ce50b8d49c557a62b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 20:00:19.095208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2437780918.mount: Deactivated successfully. Oct 2 20:00:19.101476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount603291950.mount: Deactivated successfully. Oct 2 20:00:19.121640 env[1333]: time="2023-10-02T20:00:19.121595666Z" level=info msg="CreateContainer within sandbox \"4c3c9fbf8a782608561ce6101beaef01dc85ca13af1b5d6ce50b8d49c557a62b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"115dce522464df3637748d499aa3519980776f0e5d7cbd60f1fb01ec2b7f7987\"" Oct 2 20:00:19.122215 env[1333]: time="2023-10-02T20:00:19.122182681Z" level=info msg="StartContainer for \"115dce522464df3637748d499aa3519980776f0e5d7cbd60f1fb01ec2b7f7987\"" Oct 2 20:00:19.137772 systemd[1]: Started cri-containerd-115dce522464df3637748d499aa3519980776f0e5d7cbd60f1fb01ec2b7f7987.scope. 
Oct 2 20:00:19.150000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.150000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.150000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.150000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.150000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.150000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.150000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.150000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.150000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.150000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.150000 audit: BPF prog-id=91 op=LOAD Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { bpf } for pid=2802 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c00014dc48 a2=10 a3=1c items=0 ppid=2661 pid=2802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:00:19.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131356463653532323436346466333633373734386434393961613335 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { perfmon } for pid=2802 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c00014d6b0 a2=3c a3=8 items=0 ppid=2661 pid=2802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:00:19.151000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131356463653532323436346466333633373734386434393961613335 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { bpf } for pid=2802 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { bpf } for pid=2802 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { bpf } for pid=2802 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { perfmon } for pid=2802 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { perfmon } for pid=2802 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { perfmon } for pid=2802 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { perfmon } for pid=2802 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { perfmon } for pid=2802 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { bpf } for pid=2802 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { bpf } for pid=2802 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit: BPF prog-id=92 op=LOAD Oct 2 20:00:19.151000 audit[2802]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00014d9d8 a2=78 a3=c00028cde0 items=0 ppid=2661 pid=2802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:00:19.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131356463653532323436346466333633373734386434393961613335 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { bpf } for pid=2802 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { bpf } for pid=2802 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: 
denied { perfmon } for pid=2802 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { perfmon } for pid=2802 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { perfmon } for pid=2802 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { perfmon } for pid=2802 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { perfmon } for pid=2802 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { bpf } for pid=2802 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { bpf } for pid=2802 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit: BPF prog-id=93 op=LOAD Oct 2 20:00:19.151000 audit[2802]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00014d770 a2=78 a3=c00028ce28 items=0 ppid=2661 pid=2802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:00:19.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131356463653532323436346466333633373734386434393961613335 Oct 2 20:00:19.151000 audit: BPF prog-id=93 op=UNLOAD Oct 2 20:00:19.151000 audit: BPF prog-id=92 op=UNLOAD Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { bpf } for pid=2802 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { bpf } for pid=2802 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { bpf } for pid=2802 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { perfmon } for pid=2802 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { perfmon } for pid=2802 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { perfmon } for pid=2802 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: 
denied { perfmon } for pid=2802 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { perfmon } for pid=2802 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { bpf } for pid=2802 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit[2802]: AVC avc: denied { bpf } for pid=2802 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:00:19.151000 audit: BPF prog-id=94 op=LOAD Oct 2 20:00:19.151000 audit[2802]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00014dc30 a2=78 a3=c00028d238 items=0 ppid=2661 pid=2802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:00:19.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131356463653532323436346466333633373734386434393961613335 Oct 2 20:00:19.169407 env[1333]: time="2023-10-02T20:00:19.169368633Z" level=info msg="StartContainer for \"115dce522464df3637748d499aa3519980776f0e5d7cbd60f1fb01ec2b7f7987\" returns successfully" Oct 2 20:00:19.190000 audit[2812]: AVC avc: denied { map_create } for pid=2812 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c110,c796 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c110,c796 tclass=bpf permissive=0 Oct 2 20:00:19.190000 audit[2812]: SYSCALL arch=c000003e syscall=321 success=no exit=-13 a0=0 a1=c00068f7d0 a2=48 a3=0 items=0 ppid=2661 pid=2812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c110,c796 key=(null) Oct 2 20:00:19.190000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 20:00:19.509040 kubelet[1918]: E1002 20:00:19.508996 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-dlm7z_kube-system(ac9e8ee3-388e-4b01-99a8-7b990e1a07c0)\"" pod="kube-system/cilium-dlm7z" podUID=ac9e8ee3-388e-4b01-99a8-7b990e1a07c0 Oct 2 20:00:19.913356 kubelet[1918]: W1002 20:00:19.913309 1918 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac9e8ee3_388e_4b01_99a8_7b990e1a07c0.slice/cri-containerd-f11c4bd5f4502d40f02dcfadf04cba32b46e0d223fde693bb3ad35bf3adc4f45.scope WatchSource:0}: container "f11c4bd5f4502d40f02dcfadf04cba32b46e0d223fde693bb3ad35bf3adc4f45" in namespace "k8s.io": not found Oct 2 20:00:19.956472 kubelet[1918]: E1002 20:00:19.956417 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:20.957186 
kubelet[1918]: E1002 20:00:20.957128 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:21.957547 kubelet[1918]: E1002 20:00:21.957475 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:22.958694 kubelet[1918]: E1002 20:00:22.958631 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:23.012762 kubelet[1918]: E1002 20:00:23.012725 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:23.021511 kubelet[1918]: W1002 20:00:23.021480 1918 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac9e8ee3_388e_4b01_99a8_7b990e1a07c0.slice/cri-containerd-c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af.scope WatchSource:0}: task c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af not found: not found Oct 2 20:00:23.959616 kubelet[1918]: E1002 20:00:23.959560 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:24.960661 kubelet[1918]: E1002 20:00:24.960602 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:25.961130 kubelet[1918]: E1002 20:00:25.961073 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:26.961666 kubelet[1918]: E1002 20:00:26.961610 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:27.789109 kubelet[1918]: E1002 20:00:27.789052 1918 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:27.802104 env[1333]: time="2023-10-02T20:00:27.802053601Z" level=info msg="StopPodSandbox for \"254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b\"" Oct 2 20:00:27.802505 env[1333]: time="2023-10-02T20:00:27.802170104Z" level=info msg="TearDown network for sandbox \"254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b\" successfully" Oct 2 20:00:27.802505 env[1333]: time="2023-10-02T20:00:27.802226206Z" level=info msg="StopPodSandbox for \"254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b\" returns successfully" Oct 2 20:00:27.803081 env[1333]: time="2023-10-02T20:00:27.803049128Z" level=info msg="RemovePodSandbox for \"254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b\"" Oct 2 20:00:27.803221 env[1333]: time="2023-10-02T20:00:27.803083629Z" level=info msg="Forcibly stopping sandbox \"254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b\"" Oct 2 20:00:27.803221 env[1333]: time="2023-10-02T20:00:27.803179332Z" level=info msg="TearDown network for sandbox \"254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b\" successfully" Oct 2 20:00:27.813122 env[1333]: time="2023-10-02T20:00:27.813091102Z" level=info msg="RemovePodSandbox \"254d08834c3630152b336531273858cbf70f5d65ae1291b0aa614e2e81ad974b\" returns successfully" Oct 2 20:00:27.813468 env[1333]: time="2023-10-02T20:00:27.813435911Z" level=info msg="StopPodSandbox for 
\"d4026d7731f8e9372c01b113800a1d327050f848ab0a60de3405b1b805560c6c\"" Oct 2 20:00:27.813576 env[1333]: time="2023-10-02T20:00:27.813518314Z" level=info msg="TearDown network for sandbox \"d4026d7731f8e9372c01b113800a1d327050f848ab0a60de3405b1b805560c6c\" successfully" Oct 2 20:00:27.813631 env[1333]: time="2023-10-02T20:00:27.813573015Z" level=info msg="StopPodSandbox for \"d4026d7731f8e9372c01b113800a1d327050f848ab0a60de3405b1b805560c6c\" returns successfully" Oct 2 20:00:27.813829 env[1333]: time="2023-10-02T20:00:27.813805722Z" level=info msg="RemovePodSandbox for \"d4026d7731f8e9372c01b113800a1d327050f848ab0a60de3405b1b805560c6c\"" Oct 2 20:00:27.813944 env[1333]: time="2023-10-02T20:00:27.813911124Z" level=info msg="Forcibly stopping sandbox \"d4026d7731f8e9372c01b113800a1d327050f848ab0a60de3405b1b805560c6c\"" Oct 2 20:00:27.814017 env[1333]: time="2023-10-02T20:00:27.813984226Z" level=info msg="TearDown network for sandbox \"d4026d7731f8e9372c01b113800a1d327050f848ab0a60de3405b1b805560c6c\" successfully" Oct 2 20:00:27.822726 env[1333]: time="2023-10-02T20:00:27.822654763Z" level=info msg="RemovePodSandbox \"d4026d7731f8e9372c01b113800a1d327050f848ab0a60de3405b1b805560c6c\" returns successfully" Oct 2 20:00:27.962541 kubelet[1918]: E1002 20:00:27.962485 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:28.014116 kubelet[1918]: E1002 20:00:28.014087 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:28.962912 kubelet[1918]: E1002 20:00:28.962856 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:29.963507 kubelet[1918]: E1002 20:00:29.963452 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:30.964278 kubelet[1918]: E1002 20:00:30.964225 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:31.091643 env[1333]: time="2023-10-02T20:00:31.091601308Z" level=info msg="CreateContainer within sandbox \"8e45664f0ac7be11e5db273fbd964cb761e7ca3ee28a8f3d4d314dbbb900e57d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 20:00:31.137186 env[1333]: time="2023-10-02T20:00:31.137133365Z" level=info msg="CreateContainer within sandbox \"8e45664f0ac7be11e5db273fbd964cb761e7ca3ee28a8f3d4d314dbbb900e57d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a\"" Oct 2 20:00:31.137785 env[1333]: time="2023-10-02T20:00:31.137748482Z" level=info msg="StartContainer for \"17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a\"" Oct 2 20:00:31.158024 systemd[1]: run-containerd-runc-k8s.io-17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a-runc.SO3LcA.mount: Deactivated successfully. Oct 2 20:00:31.161372 systemd[1]: Started cri-containerd-17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a.scope. Oct 2 20:00:31.171279 systemd[1]: cri-containerd-17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a.scope: Deactivated successfully. Oct 2 20:00:31.171599 systemd[1]: Stopped cri-containerd-17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a.scope. 
Oct 2 20:00:31.651358 env[1333]: time="2023-10-02T20:00:31.651298653Z" level=info msg="shim disconnected" id=17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a Oct 2 20:00:31.651358 env[1333]: time="2023-10-02T20:00:31.651356155Z" level=warning msg="cleaning up after shim disconnected" id=17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a namespace=k8s.io Oct 2 20:00:31.651358 env[1333]: time="2023-10-02T20:00:31.651367555Z" level=info msg="cleaning up dead shim" Oct 2 20:00:31.658771 env[1333]: time="2023-10-02T20:00:31.658725858Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2860 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:00:31Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:00:31.659017 env[1333]: time="2023-10-02T20:00:31.658967165Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 20:00:31.659222 env[1333]: time="2023-10-02T20:00:31.659188271Z" level=error msg="Failed to pipe stderr of container \"17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a\"" error="reading from a closed fifo" Oct 2 20:00:31.659662 env[1333]: time="2023-10-02T20:00:31.659620883Z" level=error msg="Failed to pipe stdout of container \"17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a\"" error="reading from a closed fifo" Oct 2 20:00:31.665112 env[1333]: time="2023-10-02T20:00:31.665073933Z" level=error msg="StartContainer for \"17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:00:31.665303 kubelet[1918]: E1002 20:00:31.665270 1918 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a" Oct 2 20:00:31.665416 kubelet[1918]: E1002 20:00:31.665386 1918 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:00:31.665416 kubelet[1918]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:00:31.665416 kubelet[1918]: rm /hostbin/cilium-mount Oct 2 20:00:31.665416 kubelet[1918]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-gmmkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-dlm7z_kube-system(ac9e8ee3-388e-4b01-99a8-7b990e1a07c0): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:00:31.665693 kubelet[1918]: E1002 20:00:31.665433 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-dlm7z" podUID=ac9e8ee3-388e-4b01-99a8-7b990e1a07c0 Oct 2 20:00:31.964942 kubelet[1918]: E1002 20:00:31.964812 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:32.117971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a-rootfs.mount: Deactivated successfully. 
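The kubelet error above flattens the failing init container spec onto one line. For readability, a partial reconstruction using k8s.io/api/core/v1 types; every value is copied from that dump (the kube-api-access mount and nil-valued fields are omitted, and availability of the k8s.io/api module is assumed):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// Partial reconstruction of the mount-cgroup init container from the kubelet
// error dump above.
var mountCgroup = corev1.Container{
	Name:  "mount-cgroup",
	Image: "quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b",
	Command: []string{"sh", "-ec", `cp /usr/bin/cilium-mount /hostbin/cilium-mount;
nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
rm /hostbin/cilium-mount`},
	Env: []corev1.EnvVar{
		{Name: "CGROUP_ROOT", Value: "/run/cilium/cgroupv2"},
		{Name: "BIN_PATH", Value: "/opt/cni/bin"},
	},
	VolumeMounts: []corev1.VolumeMount{
		{Name: "hostproc", MountPath: "/hostproc"},
		{Name: "cni-path", MountPath: "/hostbin"},
	},
	SecurityContext: &corev1.SecurityContext{
		Capabilities: &corev1.Capabilities{
			Add:  []corev1.Capability{"SYS_ADMIN", "SYS_CHROOT", "SYS_PTRACE"},
			Drop: []corev1.Capability{"ALL"},
		},
		// These SELinux options are what the runtime tries to apply via
		// /proc/self/attr/keycreate, the write that fails on this node.
		SELinuxOptions: &corev1.SELinuxOptions{Type: "spc_t", Level: "s0"},
	},
}

func main() {
	fmt.Println(mountCgroup.Name, "->", mountCgroup.Image)
}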
Oct 2 20:00:32.536644 kubelet[1918]: I1002 20:00:32.536608 1918 scope.go:115] "RemoveContainer" containerID="c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af"
Oct 2 20:00:32.537046 kubelet[1918]: I1002 20:00:32.537023 1918 scope.go:115] "RemoveContainer" containerID="c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af"
Oct 2 20:00:32.538830 env[1333]: time="2023-10-02T20:00:32.538773285Z" level=info msg="RemoveContainer for \"c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af\""
Oct 2 20:00:32.539322 env[1333]: time="2023-10-02T20:00:32.539278999Z" level=info msg="RemoveContainer for \"c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af\""
Oct 2 20:00:32.539444 env[1333]: time="2023-10-02T20:00:32.539398403Z" level=error msg="RemoveContainer for \"c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af\" failed" error="failed to set removing state for container \"c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af\": container is already in removing state"
Oct 2 20:00:32.539677 kubelet[1918]: E1002 20:00:32.539655 1918 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af\": container is already in removing state" containerID="c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af"
Oct 2 20:00:32.539780 kubelet[1918]: I1002 20:00:32.539698 1918 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af} err="rpc error: code = Unknown desc = failed to set removing state for container \"c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af\": container is already in removing state"
Oct 2 20:00:32.550640 env[1333]: time="2023-10-02T20:00:32.550607613Z" level=info msg="RemoveContainer for \"c4571fcef80b111d4057041cb5f1fbfed6b32b05af56c6f9eae8792b389cf1af\" returns successfully"
Oct 2 20:00:32.551031 kubelet[1918]: E1002 20:00:32.551009 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-dlm7z_kube-system(ac9e8ee3-388e-4b01-99a8-7b990e1a07c0)\"" pod="kube-system/cilium-dlm7z" podUID=ac9e8ee3-388e-4b01-99a8-7b990e1a07c0
Oct 2 20:00:32.965947 kubelet[1918]: E1002 20:00:32.965888 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:33.015024 kubelet[1918]: E1002 20:00:33.014986 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:00:33.966550 kubelet[1918]: E1002 20:00:33.966485 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:34.757163 kubelet[1918]: W1002 20:00:34.757113 1918 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac9e8ee3_388e_4b01_99a8_7b990e1a07c0.slice/cri-containerd-17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a.scope WatchSource:0}: task 17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a not found: not found
Oct 2 20:00:34.967279 kubelet[1918]: E1002 20:00:34.967224 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:35.967780 kubelet[1918]: E1002 20:00:35.967723 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:36.968239 kubelet[1918]: E1002 20:00:36.968186 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:37.968651 kubelet[1918]: E1002 20:00:37.968590 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:38.016318 kubelet[1918]: E1002 20:00:38.016285 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:00:38.969742 kubelet[1918]: E1002 20:00:38.969684 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:39.970536 kubelet[1918]: E1002 20:00:39.970421 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:40.971427 kubelet[1918]: E1002 20:00:40.971368 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:41.972470 kubelet[1918]: E1002 20:00:41.972411 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:42.972954 kubelet[1918]: E1002 20:00:42.972916 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:43.017965 kubelet[1918]: E1002 20:00:43.017936 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:00:43.973147 kubelet[1918]: E1002 20:00:43.973077 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:44.973303 kubelet[1918]: E1002 20:00:44.973253 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:45.974373 kubelet[1918]: E1002 20:00:45.974324 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:46.975035 kubelet[1918]: E1002 20:00:46.974974 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:47.089535 kubelet[1918]: E1002 20:00:47.089477 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-dlm7z_kube-system(ac9e8ee3-388e-4b01-99a8-7b990e1a07c0)\"" pod="kube-system/cilium-dlm7z" podUID=ac9e8ee3-388e-4b01-99a8-7b990e1a07c0
Oct 2 20:00:47.788869 kubelet[1918]: E1002 20:00:47.788782 1918 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:47.975944 kubelet[1918]: E1002 20:00:47.975886 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:48.019650 kubelet[1918]: E1002 20:00:48.019617 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:00:48.976214 kubelet[1918]: E1002 20:00:48.976166 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:49.976840 kubelet[1918]: E1002 20:00:49.976783 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:50.977399 kubelet[1918]: E1002 20:00:50.977342 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:51.978094 kubelet[1918]: E1002 20:00:51.978034 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:52.978940 kubelet[1918]: E1002 20:00:52.978877 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:53.020708 kubelet[1918]: E1002 20:00:53.020678 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:00:53.979731 kubelet[1918]: E1002 20:00:53.979676 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:54.980391 kubelet[1918]: E1002 20:00:54.980328 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:55.980747 kubelet[1918]: E1002 20:00:55.980692 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:56.981206 kubelet[1918]: E1002 20:00:56.981147 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:57.981583 kubelet[1918]: E1002 20:00:57.981513 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:58.022240 kubelet[1918]: E1002 20:00:58.022208 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:00:58.091848 env[1333]: time="2023-10-02T20:00:58.091800040Z" level=info msg="CreateContainer within sandbox \"8e45664f0ac7be11e5db273fbd964cb761e7ca3ee28a8f3d4d314dbbb900e57d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}"
Oct 2 20:00:58.125389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount842556899.mount: Deactivated successfully.
Oct 2 20:00:58.138846 env[1333]: time="2023-10-02T20:00:58.138793921Z" level=info msg="CreateContainer within sandbox \"8e45664f0ac7be11e5db273fbd964cb761e7ca3ee28a8f3d4d314dbbb900e57d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"b5007be630520ed76a871083af04cacde7e63f8182721177a776411240b1c18a\""
Oct 2 20:00:58.139458 env[1333]: time="2023-10-02T20:00:58.139427440Z" level=info msg="StartContainer for \"b5007be630520ed76a871083af04cacde7e63f8182721177a776411240b1c18a\""
Oct 2 20:00:58.162200 systemd[1]: Started cri-containerd-b5007be630520ed76a871083af04cacde7e63f8182721177a776411240b1c18a.scope.
Oct 2 20:00:58.172136 systemd[1]: cri-containerd-b5007be630520ed76a871083af04cacde7e63f8182721177a776411240b1c18a.scope: Deactivated successfully.
Oct 2 20:00:58.172465 systemd[1]: Stopped cri-containerd-b5007be630520ed76a871083af04cacde7e63f8182721177a776411240b1c18a.scope.
Oct 2 20:00:58.194070 env[1333]: time="2023-10-02T20:00:58.194009144Z" level=info msg="shim disconnected" id=b5007be630520ed76a871083af04cacde7e63f8182721177a776411240b1c18a
Oct 2 20:00:58.194070 env[1333]: time="2023-10-02T20:00:58.194069246Z" level=warning msg="cleaning up after shim disconnected" id=b5007be630520ed76a871083af04cacde7e63f8182721177a776411240b1c18a namespace=k8s.io
Oct 2 20:00:58.194365 env[1333]: time="2023-10-02T20:00:58.194080246Z" level=info msg="cleaning up dead shim"
Oct 2 20:00:58.203098 env[1333]: time="2023-10-02T20:00:58.203054710Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2901 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:00:58Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b5007be630520ed76a871083af04cacde7e63f8182721177a776411240b1c18a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Oct 2 20:00:58.203370 env[1333]: time="2023-10-02T20:00:58.203313517Z" level=error msg="copy shim log" error="read /proc/self/fd/49: file already closed"
Oct 2 20:00:58.203817 env[1333]: time="2023-10-02T20:00:58.203771931Z" level=error msg="Failed to pipe stdout of container \"b5007be630520ed76a871083af04cacde7e63f8182721177a776411240b1c18a\"" error="reading from a closed fifo"
Oct 2 20:00:58.204550 env[1333]: time="2023-10-02T20:00:58.203886534Z" level=error msg="Failed to pipe stderr of container \"b5007be630520ed76a871083af04cacde7e63f8182721177a776411240b1c18a\"" error="reading from a closed fifo"
Oct 2 20:00:58.209864 env[1333]: time="2023-10-02T20:00:58.209820509Z" level=error msg="StartContainer for \"b5007be630520ed76a871083af04cacde7e63f8182721177a776411240b1c18a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct 2 20:00:58.210047 kubelet[1918]: E1002 20:00:58.210025 1918 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="b5007be630520ed76a871083af04cacde7e63f8182721177a776411240b1c18a"
Oct 2 20:00:58.210444 kubelet[1918]: E1002 20:00:58.210424 1918 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Oct 2 20:00:58.210444 kubelet[1918]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Oct 2 20:00:58.210444 kubelet[1918]: rm /hostbin/cilium-mount
Oct 2 20:00:58.210444 kubelet[1918]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-gmmkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-dlm7z_kube-system(ac9e8ee3-388e-4b01-99a8-7b990e1a07c0): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Oct 2 20:00:58.210685 kubelet[1918]: E1002 20:00:58.210475 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-dlm7z" podUID=ac9e8ee3-388e-4b01-99a8-7b990e1a07c0
Oct 2 20:00:58.583997 kubelet[1918]: I1002 20:00:58.583958 1918 scope.go:115] "RemoveContainer" containerID="17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a"
Oct 2 20:00:58.584413 kubelet[1918]: I1002 20:00:58.584382 1918 scope.go:115] "RemoveContainer" containerID="17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a"
Oct 2 20:00:58.585880 env[1333]: time="2023-10-02T20:00:58.585837459Z" level=info msg="RemoveContainer for \"17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a\""
Oct 2 20:00:58.586416 env[1333]: time="2023-10-02T20:00:58.586374575Z" level=info msg="RemoveContainer for \"17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a\""
Oct 2 20:00:58.586544 env[1333]: time="2023-10-02T20:00:58.586475978Z" level=error msg="RemoveContainer for \"17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a\" failed" error="failed to set removing state for container \"17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a\": container is already in removing state"
Oct 2 20:00:58.586734 kubelet[1918]: E1002 20:00:58.586702 1918 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a\": container is already in removing state" containerID="17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a"
Oct 2 20:00:58.586829 kubelet[1918]: E1002 20:00:58.586740 1918 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a": container is already in removing state; Skipping pod "cilium-dlm7z_kube-system(ac9e8ee3-388e-4b01-99a8-7b990e1a07c0)"
Oct 2 20:00:58.587073 kubelet[1918]: E1002 20:00:58.587039 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-dlm7z_kube-system(ac9e8ee3-388e-4b01-99a8-7b990e1a07c0)\"" pod="kube-system/cilium-dlm7z" podUID=ac9e8ee3-388e-4b01-99a8-7b990e1a07c0
Oct 2 20:00:58.606409 env[1333]: time="2023-10-02T20:00:58.606377662Z" level=info msg="RemoveContainer for \"17f2bedff180818a6807f9761ddb544f0b9ff74f187b5b901cfd598f0efbef1a\" returns successfully"
Oct 2 20:00:58.982446 kubelet[1918]: E1002 20:00:58.982305 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:59.119156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5007be630520ed76a871083af04cacde7e63f8182721177a776411240b1c18a-rootfs.mount: Deactivated successfully.
Oct 2 20:00:59.983245 kubelet[1918]: E1002 20:00:59.983179 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:01:00.984090 kubelet[1918]: E1002 20:01:00.984035 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:01:01.299892 kubelet[1918]: W1002 20:01:01.299849 1918 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac9e8ee3_388e_4b01_99a8_7b990e1a07c0.slice/cri-containerd-b5007be630520ed76a871083af04cacde7e63f8182721177a776411240b1c18a.scope WatchSource:0}: task b5007be630520ed76a871083af04cacde7e63f8182721177a776411240b1c18a not found: not found
Oct 2 20:01:01.985200 kubelet[1918]: E1002 20:01:01.985140 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:01:02.985424 kubelet[1918]: E1002 20:01:02.985373 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:01:03.023159 kubelet[1918]: E1002 20:01:03.023110 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:01:03.986593 kubelet[1918]: E1002 20:01:03.986520 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:01:04.987510 kubelet[1918]: E1002 20:01:04.987452 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:01:05.988029 kubelet[1918]: E1002 20:01:05.987972 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:01:06.988648 kubelet[1918]: E1002 20:01:06.988590 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:01:07.788736 kubelet[1918]: E1002 20:01:07.788681 1918 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:01:07.989039 kubelet[1918]: E1002 20:01:07.988978 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:01:08.024739 kubelet[1918]: E1002 20:01:08.024694 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:01:08.990023 kubelet[1918]: E1002 20:01:08.989966 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:01:09.990824 kubelet[1918]: E1002 20:01:09.990769 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:01:10.991321 kubelet[1918]: E1002 20:01:10.991262 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:01:11.991742 kubelet[1918]: E1002 20:01:11.991689 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:01:12.992793 kubelet[1918]: E1002 20:01:12.992734 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:01:13.025873 kubelet[1918]: E1002 20:01:13.025839 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:01:13.089627 kubelet[1918]: E1002 20:01:13.089586 1918 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-dlm7z_kube-system(ac9e8ee3-388e-4b01-99a8-7b990e1a07c0)\"" pod="kube-system/cilium-dlm7z" podUID=ac9e8ee3-388e-4b01-99a8-7b990e1a07c0
Oct 2 20:01:13.993849 kubelet[1918]: E1002 20:01:13.993793 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:01:14.994550 kubelet[1918]: E1002 20:01:14.994485 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:01:15.995333 kubelet[1918]: E1002 20:01:15.995279 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:01:16.995893 kubelet[1918]: E1002 20:01:16.995835 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:01:17.314609 env[1333]: time="2023-10-02T20:01:17.314558600Z" level=info msg="StopPodSandbox for \"8e45664f0ac7be11e5db273fbd964cb761e7ca3ee28a8f3d4d314dbbb900e57d\""
Oct 2 20:01:17.317666 env[1333]: time="2023-10-02T20:01:17.314640002Z" level=info msg="Container to stop \"b5007be630520ed76a871083af04cacde7e63f8182721177a776411240b1c18a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 2 20:01:17.317037 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e45664f0ac7be11e5db273fbd964cb761e7ca3ee28a8f3d4d314dbbb900e57d-shm.mount: Deactivated successfully.
Oct 2 20:01:17.333989 kernel: kauditd_printk_skb: 168 callbacks suppressed
Oct 2 20:01:17.334086 kernel: audit: type=1334 audit(1696276877.323:741): prog-id=83 op=UNLOAD
Oct 2 20:01:17.323000 audit: BPF prog-id=83 op=UNLOAD
Oct 2 20:01:17.324690 systemd[1]: cri-containerd-8e45664f0ac7be11e5db273fbd964cb761e7ca3ee28a8f3d4d314dbbb900e57d.scope: Deactivated successfully.
Oct 2 20:01:17.334748 env[1333]: time="2023-10-02T20:01:17.334710810Z" level=info msg="StopContainer for \"115dce522464df3637748d499aa3519980776f0e5d7cbd60f1fb01ec2b7f7987\" with timeout 30 (s)"
Oct 2 20:01:17.335123 env[1333]: time="2023-10-02T20:01:17.335092921Z" level=info msg="Stop container \"115dce522464df3637748d499aa3519980776f0e5d7cbd60f1fb01ec2b7f7987\" with signal terminated"
Oct 2 20:01:17.336000 audit: BPF prog-id=86 op=UNLOAD
Oct 2 20:01:17.344554 kernel: audit: type=1334 audit(1696276877.336:742): prog-id=86 op=UNLOAD
Oct 2 20:01:17.349000 audit: BPF prog-id=91 op=UNLOAD
Oct 2 20:01:17.351104 systemd[1]: cri-containerd-115dce522464df3637748d499aa3519980776f0e5d7cbd60f1fb01ec2b7f7987.scope: Deactivated successfully.
Oct 2 20:01:17.356542 kernel: audit: type=1334 audit(1696276877.349:743): prog-id=91 op=UNLOAD
Oct 2 20:01:17.358000 audit: BPF prog-id=94 op=UNLOAD
Oct 2 20:01:17.364776 kernel: audit: type=1334 audit(1696276877.358:744): prog-id=94 op=UNLOAD
Oct 2 20:01:17.367658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e45664f0ac7be11e5db273fbd964cb761e7ca3ee28a8f3d4d314dbbb900e57d-rootfs.mount: Deactivated successfully.
Oct 2 20:01:17.383653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-115dce522464df3637748d499aa3519980776f0e5d7cbd60f1fb01ec2b7f7987-rootfs.mount: Deactivated successfully.
Oct 2 20:01:17.426097 env[1333]: time="2023-10-02T20:01:17.426041173Z" level=info msg="shim disconnected" id=115dce522464df3637748d499aa3519980776f0e5d7cbd60f1fb01ec2b7f7987
Oct 2 20:01:17.426323 env[1333]: time="2023-10-02T20:01:17.426099975Z" level=warning msg="cleaning up after shim disconnected" id=115dce522464df3637748d499aa3519980776f0e5d7cbd60f1fb01ec2b7f7987 namespace=k8s.io
Oct 2 20:01:17.426323 env[1333]: time="2023-10-02T20:01:17.426112676Z" level=info msg="cleaning up dead shim"
Oct 2 20:01:17.426617 env[1333]: time="2023-10-02T20:01:17.426581690Z" level=info msg="shim disconnected" id=8e45664f0ac7be11e5db273fbd964cb761e7ca3ee28a8f3d4d314dbbb900e57d
Oct 2 20:01:17.426745 env[1333]: time="2023-10-02T20:01:17.426725794Z" level=warning msg="cleaning up after shim disconnected" id=8e45664f0ac7be11e5db273fbd964cb761e7ca3ee28a8f3d4d314dbbb900e57d namespace=k8s.io
Oct 2 20:01:17.426824 env[1333]: time="2023-10-02T20:01:17.426809497Z" level=info msg="cleaning up dead shim"
Oct 2 20:01:17.437877 env[1333]: time="2023-10-02T20:01:17.437841331Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:01:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2952 runtime=io.containerd.runc.v2\n"
Oct 2 20:01:17.438242 env[1333]: time="2023-10-02T20:01:17.438211242Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:01:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2953 runtime=io.containerd.runc.v2\n"
Oct 2 20:01:17.438601 env[1333]: time="2023-10-02T20:01:17.438513551Z" level=info msg="TearDown network for sandbox \"8e45664f0ac7be11e5db273fbd964cb761e7ca3ee28a8f3d4d314dbbb900e57d\" successfully"
Oct 2 20:01:17.438685 env[1333]: time="2023-10-02T20:01:17.438602154Z" level=info msg="StopPodSandbox for \"8e45664f0ac7be11e5db273fbd964cb761e7ca3ee28a8f3d4d314dbbb900e57d\" returns successfully"
Oct 2 20:01:17.444336 env[1333]: time="2023-10-02T20:01:17.444293926Z" level=info msg="StopContainer for \"115dce522464df3637748d499aa3519980776f0e5d7cbd60f1fb01ec2b7f7987\" returns successfully"
Oct 2 20:01:17.444813 env[1333]: time="2023-10-02T20:01:17.444786341Z" level=info msg="StopPodSandbox for \"4c3c9fbf8a782608561ce6101beaef01dc85ca13af1b5d6ce50b8d49c557a62b\""
Oct 2 20:01:17.444902 env[1333]: time="2023-10-02T20:01:17.444834542Z" level=info msg="Container to stop \"115dce522464df3637748d499aa3519980776f0e5d7cbd60f1fb01ec2b7f7987\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 2 20:01:17.446642 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4c3c9fbf8a782608561ce6101beaef01dc85ca13af1b5d6ce50b8d49c557a62b-shm.mount: Deactivated successfully.
Oct 2 20:01:17.452097 systemd[1]: cri-containerd-4c3c9fbf8a782608561ce6101beaef01dc85ca13af1b5d6ce50b8d49c557a62b.scope: Deactivated successfully.
Oct 2 20:01:17.457583 kernel: audit: type=1334 audit(1696276877.450:745): prog-id=87 op=UNLOAD
Oct 2 20:01:17.450000 audit: BPF prog-id=87 op=UNLOAD
Oct 2 20:01:17.459000 audit: BPF prog-id=90 op=UNLOAD
Oct 2 20:01:17.466560 kernel: audit: type=1334 audit(1696276877.459:746): prog-id=90 op=UNLOAD
Oct 2 20:01:17.476009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c3c9fbf8a782608561ce6101beaef01dc85ca13af1b5d6ce50b8d49c557a62b-rootfs.mount: Deactivated successfully.
Oct 2 20:01:17.498941 env[1333]: time="2023-10-02T20:01:17.498889078Z" level=info msg="shim disconnected" id=4c3c9fbf8a782608561ce6101beaef01dc85ca13af1b5d6ce50b8d49c557a62b
Oct 2 20:01:17.499154 env[1333]: time="2023-10-02T20:01:17.499132185Z" level=warning msg="cleaning up after shim disconnected" id=4c3c9fbf8a782608561ce6101beaef01dc85ca13af1b5d6ce50b8d49c557a62b namespace=k8s.io
Oct 2 20:01:17.499217 env[1333]: time="2023-10-02T20:01:17.499154986Z" level=info msg="cleaning up dead shim"
Oct 2 20:01:17.507717 env[1333]: time="2023-10-02T20:01:17.507680844Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:01:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2998 runtime=io.containerd.runc.v2\n"
Oct 2 20:01:17.508032 env[1333]: time="2023-10-02T20:01:17.507998354Z" level=info msg="TearDown network for sandbox \"4c3c9fbf8a782608561ce6101beaef01dc85ca13af1b5d6ce50b8d49c557a62b\" successfully"
Oct 2 20:01:17.508032 env[1333]: time="2023-10-02T20:01:17.508027355Z" level=info msg="StopPodSandbox for \"4c3c9fbf8a782608561ce6101beaef01dc85ca13af1b5d6ce50b8d49c557a62b\" returns successfully"
Oct 2 20:01:17.526201 kubelet[1918]: I1002 20:01:17.526169 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-hostproc\") pod \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") "
Oct 2 20:01:17.526375 kubelet[1918]: I1002 20:01:17.526211 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-clustermesh-secrets\") pod \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") "
Oct 2 20:01:17.526375 kubelet[1918]: I1002 20:01:17.526234 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-cilium-cgroup\") pod \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") "
Oct 2 20:01:17.526375 kubelet[1918]: I1002 20:01:17.526263 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmmkf\" (UniqueName: \"kubernetes.io/projected/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-kube-api-access-gmmkf\") pod \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") "
Oct 2 20:01:17.526375 kubelet[1918]: I1002 20:01:17.526286 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-hubble-tls\") pod \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") "
Oct 2 20:01:17.526375 kubelet[1918]: I1002 20:01:17.526310 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-host-proc-sys-kernel\") pod \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") "
Oct 2 20:01:17.526375 kubelet[1918]: I1002 20:01:17.526333 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-cni-path\") pod \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") "
Oct 2 20:01:17.526953 kubelet[1918]: I1002 20:01:17.526359 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-host-proc-sys-net\") pod \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") "
Oct 2 20:01:17.526953 kubelet[1918]: I1002 20:01:17.526383 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-cilium-run\") pod \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") "
Oct 2 20:01:17.526953 kubelet[1918]: I1002 20:01:17.526406 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-bpf-maps\") pod \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") "
Oct 2 20:01:17.526953 kubelet[1918]: I1002 20:01:17.526432 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-cilium-config-path\") pod \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") "
Oct 2 20:01:17.526953 kubelet[1918]: I1002 20:01:17.526460 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-cilium-ipsec-secrets\") pod \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") "
Oct 2 20:01:17.526953 kubelet[1918]: I1002 20:01:17.526486 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-etc-cni-netd\") pod \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") "
Oct 2 20:01:17.527224 kubelet[1918]: I1002 20:01:17.526510 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-lib-modules\") pod \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") "
Oct 2 20:01:17.527224 kubelet[1918]: I1002 20:01:17.526554 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-xtables-lock\") pod \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\" (UID: \"ac9e8ee3-388e-4b01-99a8-7b990e1a07c0\") "
Oct 2 20:01:17.527224 kubelet[1918]: I1002 20:01:17.526585 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-cni-path" (OuterVolumeSpecName: "cni-path") pod "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0" (UID: "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:01:17.527224 kubelet[1918]: I1002 20:01:17.526593 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0" (UID: "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:01:17.527224 kubelet[1918]: I1002 20:01:17.526618 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0" (UID: "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:01:17.527461 kubelet[1918]: I1002 20:01:17.526631 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-hostproc" (OuterVolumeSpecName: "hostproc") pod "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0" (UID: "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:01:17.527461 kubelet[1918]: I1002 20:01:17.526640 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0" (UID: "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:01:17.527461 kubelet[1918]: I1002 20:01:17.526660 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0" (UID: "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:01:17.527461 kubelet[1918]: I1002 20:01:17.526978 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0" (UID: "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:01:17.527461 kubelet[1918]: I1002 20:01:17.527022 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0" (UID: "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:01:17.527718 kubelet[1918]: I1002 20:01:17.527519 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0" (UID: "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:01:17.527718 kubelet[1918]: I1002 20:01:17.527569 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0" (UID: "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:01:17.528552 kubelet[1918]: W1002 20:01:17.528093 1918 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct 2 20:01:17.530711 kubelet[1918]: I1002 20:01:17.530686 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0" (UID: "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 2 20:01:17.531389 kubelet[1918]: I1002 20:01:17.531366 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-kube-api-access-gmmkf" (OuterVolumeSpecName: "kube-api-access-gmmkf") pod "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0" (UID: "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0"). InnerVolumeSpecName "kube-api-access-gmmkf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 20:01:17.534203 kubelet[1918]: I1002 20:01:17.534175 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0" (UID: "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 2 20:01:17.534291 kubelet[1918]: I1002 20:01:17.534250 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0" (UID: "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 20:01:17.536148 kubelet[1918]: I1002 20:01:17.536123 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0" (UID: "ac9e8ee3-388e-4b01-99a8-7b990e1a07c0"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 2 20:01:17.618565 kubelet[1918]: I1002 20:01:17.617390 1918 scope.go:115] "RemoveContainer" containerID="b5007be630520ed76a871083af04cacde7e63f8182721177a776411240b1c18a"
Oct 2 20:01:17.619860 env[1333]: time="2023-10-02T20:01:17.619813337Z" level=info msg="RemoveContainer for \"b5007be630520ed76a871083af04cacde7e63f8182721177a776411240b1c18a\""
Oct 2 20:01:17.623488 systemd[1]: Removed slice kubepods-burstable-podac9e8ee3_388e_4b01_99a8_7b990e1a07c0.slice.
Oct 2 20:01:17.626813 kubelet[1918]: I1002 20:01:17.626789 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg64d\" (UniqueName: \"kubernetes.io/projected/b677cfbb-8bde-41d4-8033-6e40c653cf1c-kube-api-access-kg64d\") pod \"b677cfbb-8bde-41d4-8033-6e40c653cf1c\" (UID: \"b677cfbb-8bde-41d4-8033-6e40c653cf1c\") "
Oct 2 20:01:17.627051 kubelet[1918]: I1002 20:01:17.627033 1918 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b677cfbb-8bde-41d4-8033-6e40c653cf1c-cilium-config-path\") pod \"b677cfbb-8bde-41d4-8033-6e40c653cf1c\" (UID: \"b677cfbb-8bde-41d4-8033-6e40c653cf1c\") "
Oct 2 20:01:17.627144 kubelet[1918]: I1002 20:01:17.627075 1918 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-cilium-cgroup\") on node \"10.200.8.20\" DevicePath \"\""
Oct 2 20:01:17.627144 kubelet[1918]: I1002 20:01:17.627093 1918 reconciler.go:399] "Volume detached for volume \"kube-api-access-gmmkf\" (UniqueName: \"kubernetes.io/projected/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-kube-api-access-gmmkf\") on node \"10.200.8.20\" DevicePath \"\""
Oct 2 20:01:17.627144 kubelet[1918]: I1002 20:01:17.627108 1918 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-hubble-tls\") on node \"10.200.8.20\" DevicePath \"\""
Oct 2 20:01:17.627144 kubelet[1918]: I1002 20:01:17.627123 1918 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-host-proc-sys-kernel\") on node \"10.200.8.20\" DevicePath \"\""
Oct 2 20:01:17.627144 kubelet[1918]: I1002 20:01:17.627136 1918 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-hostproc\") on node \"10.200.8.20\" DevicePath \"\""
Oct 2 20:01:17.627356 kubelet[1918]: I1002 20:01:17.627149 1918 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-clustermesh-secrets\") on node \"10.200.8.20\" DevicePath \"\""
Oct 2 20:01:17.627356 kubelet[1918]: I1002 20:01:17.627162 1918 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-cni-path\") on node \"10.200.8.20\" DevicePath \"\""
Oct 2 20:01:17.627356 kubelet[1918]: I1002 20:01:17.627176 1918 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-host-proc-sys-net\") on node \"10.200.8.20\" DevicePath \"\""
Oct 2 20:01:17.627356 kubelet[1918]: I1002 20:01:17.627191 1918 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-cilium-run\") on node \"10.200.8.20\" DevicePath \"\""
Oct 2 20:01:17.627356 kubelet[1918]: I1002 20:01:17.627205 1918 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-cilium-config-path\") on node \"10.200.8.20\" DevicePath \"\""
Oct 2 20:01:17.627356 kubelet[1918]: I1002 20:01:17.627219 1918 reconciler.go:399] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-cilium-ipsec-secrets\") on node \"10.200.8.20\" DevicePath \"\""
Oct 2 20:01:17.627356 kubelet[1918]: I1002 20:01:17.627232 1918 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-etc-cni-netd\") on node \"10.200.8.20\" DevicePath \"\""
Oct 2 20:01:17.627356 kubelet[1918]: I1002 20:01:17.627245 1918 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-lib-modules\") on node \"10.200.8.20\" DevicePath \"\""
Oct 2 20:01:17.627725 kubelet[1918]: I1002 20:01:17.627258 1918 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-bpf-maps\") on node \"10.200.8.20\" DevicePath \"\""
Oct 2 20:01:17.627725 kubelet[1918]: I1002 20:01:17.627272 1918 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0-xtables-lock\") on node \"10.200.8.20\" DevicePath \"\""
Oct 2 20:01:17.627725 kubelet[1918]: W1002 20:01:17.627436 1918 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/b677cfbb-8bde-41d4-8033-6e40c653cf1c/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct 2 20:01:17.629376 kubelet[1918]: I1002 20:01:17.629341 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b677cfbb-8bde-41d4-8033-6e40c653cf1c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b677cfbb-8bde-41d4-8033-6e40c653cf1c" (UID: "b677cfbb-8bde-41d4-8033-6e40c653cf1c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 2 20:01:17.632955 env[1333]: time="2023-10-02T20:01:17.632919134Z" level=info msg="RemoveContainer for \"b5007be630520ed76a871083af04cacde7e63f8182721177a776411240b1c18a\" returns successfully"
Oct 2 20:01:17.636883 kubelet[1918]: I1002 20:01:17.636852 1918 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b677cfbb-8bde-41d4-8033-6e40c653cf1c-kube-api-access-kg64d" (OuterVolumeSpecName: "kube-api-access-kg64d") pod "b677cfbb-8bde-41d4-8033-6e40c653cf1c" (UID: "b677cfbb-8bde-41d4-8033-6e40c653cf1c"). InnerVolumeSpecName "kube-api-access-kg64d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 20:01:17.637084 kubelet[1918]: I1002 20:01:17.637064 1918 scope.go:115] "RemoveContainer" containerID="115dce522464df3637748d499aa3519980776f0e5d7cbd60f1fb01ec2b7f7987"
Oct 2 20:01:17.638642 env[1333]: time="2023-10-02T20:01:17.638610706Z" level=info msg="RemoveContainer for \"115dce522464df3637748d499aa3519980776f0e5d7cbd60f1fb01ec2b7f7987\""
Oct 2 20:01:17.649589 env[1333]: time="2023-10-02T20:01:17.649559338Z" level=info msg="RemoveContainer for \"115dce522464df3637748d499aa3519980776f0e5d7cbd60f1fb01ec2b7f7987\" returns successfully"
Oct 2 20:01:17.649803 kubelet[1918]: I1002 20:01:17.649788 1918 scope.go:115] "RemoveContainer" containerID="115dce522464df3637748d499aa3519980776f0e5d7cbd60f1fb01ec2b7f7987"
Oct 2 20:01:17.650001 env[1333]: time="2023-10-02T20:01:17.649938249Z" level=error msg="ContainerStatus for \"115dce522464df3637748d499aa3519980776f0e5d7cbd60f1fb01ec2b7f7987\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"115dce522464df3637748d499aa3519980776f0e5d7cbd60f1fb01ec2b7f7987\": not found"
Oct 2 20:01:17.650131 kubelet[1918]: E1002 20:01:17.650115 1918 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"115dce522464df3637748d499aa3519980776f0e5d7cbd60f1fb01ec2b7f7987\": not found" containerID="115dce522464df3637748d499aa3519980776f0e5d7cbd60f1fb01ec2b7f7987"
Oct 2 20:01:17.650207 kubelet[1918]: I1002 20:01:17.650147 1918 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:115dce522464df3637748d499aa3519980776f0e5d7cbd60f1fb01ec2b7f7987} err="failed to get container status \"115dce522464df3637748d499aa3519980776f0e5d7cbd60f1fb01ec2b7f7987\": rpc error: code = NotFound desc = an error occurred when try to find container \"115dce522464df3637748d499aa3519980776f0e5d7cbd60f1fb01ec2b7f7987\": not found"
Oct 2 20:01:17.727792 kubelet[1918]: I1002 20:01:17.727734 1918 reconciler.go:399] "Volume detached for volume \"kube-api-access-kg64d\" (UniqueName: \"kubernetes.io/projected/b677cfbb-8bde-41d4-8033-6e40c653cf1c-kube-api-access-kg64d\") on node \"10.200.8.20\" DevicePath \"\""
Oct 2 20:01:17.727792 kubelet[1918]: I1002 20:01:17.727781 1918 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b677cfbb-8bde-41d4-8033-6e40c653cf1c-cilium-config-path\") on node \"10.200.8.20\" DevicePath \"\""
Oct 2 20:01:17.924712 systemd[1]: Removed slice kubepods-besteffort-podb677cfbb_8bde_41d4_8033_6e40c653cf1c.slice.
Oct 2 20:01:17.996426 kubelet[1918]: E1002 20:01:17.996363 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:01:18.027348 kubelet[1918]: E1002 20:01:18.027314 1918 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:01:18.092387 kubelet[1918]: I1002 20:01:18.092344 1918 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ac9e8ee3-388e-4b01-99a8-7b990e1a07c0 path="/var/lib/kubelet/pods/ac9e8ee3-388e-4b01-99a8-7b990e1a07c0/volumes"
Oct 2 20:01:18.092903 kubelet[1918]: I1002 20:01:18.092881 1918 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b677cfbb-8bde-41d4-8033-6e40c653cf1c path="/var/lib/kubelet/pods/b677cfbb-8bde-41d4-8033-6e40c653cf1c/volumes"
Oct 2 20:01:18.316964 systemd[1]: var-lib-kubelet-pods-b677cfbb\x2d8bde\x2d41d4\x2d8033\x2d6e40c653cf1c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkg64d.mount: Deactivated successfully.
Oct 2 20:01:18.317105 systemd[1]: var-lib-kubelet-pods-ac9e8ee3\x2d388e\x2d4b01\x2d99a8\x2d7b990e1a07c0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgmmkf.mount: Deactivated successfully.
Oct 2 20:01:18.317207 systemd[1]: var-lib-kubelet-pods-ac9e8ee3\x2d388e\x2d4b01\x2d99a8\x2d7b990e1a07c0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Oct 2 20:01:18.317306 systemd[1]: var-lib-kubelet-pods-ac9e8ee3\x2d388e\x2d4b01\x2d99a8\x2d7b990e1a07c0-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Oct 2 20:01:18.317408 systemd[1]: var-lib-kubelet-pods-ac9e8ee3\x2d388e\x2d4b01\x2d99a8\x2d7b990e1a07c0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Oct 2 20:01:18.996626 kubelet[1918]: E1002 20:01:18.996569 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:01:19.997061 kubelet[1918]: E1002 20:01:19.997002 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:01:20.998197 kubelet[1918]: E1002 20:01:20.998132 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:01:21.999216 kubelet[1918]: E1002 20:01:21.999161 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"