Oct 2 19:26:37.202563 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Oct 2 19:26:37.202600 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Oct 2 17:55:37 -00 2023 Oct 2 19:26:37.202622 kernel: efi: EFI v2.70 by EDK II Oct 2 19:26:37.202637 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71accf98 Oct 2 19:26:37.202651 kernel: ACPI: Early table checksum verification disabled Oct 2 19:26:37.202664 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Oct 2 19:26:37.202680 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Oct 2 19:26:37.202713 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Oct 2 19:26:37.202732 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Oct 2 19:26:37.202747 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Oct 2 19:26:37.202766 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Oct 2 19:26:37.202780 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Oct 2 19:26:37.202794 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Oct 2 19:26:37.202808 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Oct 2 19:26:37.202824 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Oct 2 19:26:37.202843 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Oct 2 19:26:37.202857 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Oct 2 19:26:37.202871 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Oct 2 19:26:37.202886 kernel: printk: bootconsole [uart0] enabled Oct 2 19:26:37.202900 kernel: NUMA: Failed to initialise from firmware Oct 2 19:26:37.202914 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Oct 2 19:26:37.202929 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff] Oct 2 19:26:37.202943 kernel: Zone ranges: Oct 2 19:26:37.202957 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Oct 2 19:26:37.202972 kernel: DMA32 empty Oct 2 19:26:37.202986 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Oct 2 19:26:37.203004 kernel: Movable zone start for each node Oct 2 19:26:37.203018 kernel: Early memory node ranges Oct 2 19:26:37.203033 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff] Oct 2 19:26:37.203047 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Oct 2 19:26:37.203061 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Oct 2 19:26:37.203076 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Oct 2 19:26:37.203090 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Oct 2 19:26:37.203104 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Oct 2 19:26:37.203118 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Oct 2 19:26:37.203132 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Oct 2 19:26:37.203147 kernel: psci: probing for conduit method from ACPI. Oct 2 19:26:37.203161 kernel: psci: PSCIv1.0 detected in firmware. 
Oct 2 19:26:37.203179 kernel: psci: Using standard PSCI v0.2 function IDs Oct 2 19:26:37.203194 kernel: psci: Trusted OS migration not required Oct 2 19:26:37.203214 kernel: psci: SMC Calling Convention v1.1 Oct 2 19:26:37.203230 kernel: ACPI: SRAT not present Oct 2 19:26:37.203245 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Oct 2 19:26:37.203264 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Oct 2 19:26:37.203279 kernel: pcpu-alloc: [0] 0 [0] 1 Oct 2 19:26:37.203294 kernel: Detected PIPT I-cache on CPU0 Oct 2 19:26:37.203310 kernel: CPU features: detected: GIC system register CPU interface Oct 2 19:26:37.203325 kernel: CPU features: detected: Spectre-v2 Oct 2 19:26:37.203340 kernel: CPU features: detected: Spectre-v3a Oct 2 19:26:37.203354 kernel: CPU features: detected: Spectre-BHB Oct 2 19:26:37.203369 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 2 19:26:37.203384 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 2 19:26:37.203399 kernel: CPU features: detected: ARM erratum 1742098 Oct 2 19:26:37.203414 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Oct 2 19:26:37.203433 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Oct 2 19:26:37.203448 kernel: Policy zone: Normal Oct 2 19:26:37.203466 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 19:26:37.203482 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:26:37.203497 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 19:26:37.203512 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:26:37.203527 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:26:37.203542 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Oct 2 19:26:37.203557 kernel: Memory: 3826444K/4030464K available (9792K kernel code, 2092K rwdata, 7548K rodata, 34560K init, 779K bss, 204020K reserved, 0K cma-reserved) Oct 2 19:26:37.203573 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 2 19:26:37.203591 kernel: trace event string verifier disabled Oct 2 19:26:37.203606 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 2 19:26:37.203622 kernel: rcu: RCU event tracing is enabled. Oct 2 19:26:37.203638 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 2 19:26:37.203653 kernel: Trampoline variant of Tasks RCU enabled. Oct 2 19:26:37.203668 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:26:37.203683 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 2 19:26:37.203726 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 2 19:26:37.203743 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 2 19:26:37.203759 kernel: GICv3: 96 SPIs implemented Oct 2 19:26:37.203773 kernel: GICv3: 0 Extended SPIs implemented Oct 2 19:26:37.203789 kernel: GICv3: Distributor has no Range Selector support Oct 2 19:26:37.203809 kernel: Root IRQ handler: gic_handle_irq Oct 2 19:26:37.203824 kernel: GICv3: 16 PPIs implemented Oct 2 19:26:37.203839 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Oct 2 19:26:37.203854 kernel: ACPI: SRAT not present Oct 2 19:26:37.203868 kernel: ITS [mem 0x10080000-0x1009ffff] Oct 2 19:26:37.203884 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1) Oct 2 19:26:37.203899 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1) Oct 2 19:26:37.203914 kernel: GICv3: using LPI property table @0x00000004000c0000 Oct 2 19:26:37.203929 kernel: ITS: Using hypervisor restricted LPI range [128] Oct 2 19:26:37.203944 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000 Oct 2 19:26:37.203959 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Oct 2 19:26:37.203978 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Oct 2 19:26:37.203994 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Oct 2 19:26:37.204009 kernel: Console: colour dummy device 80x25 Oct 2 19:26:37.204025 kernel: printk: console [tty1] enabled Oct 2 19:26:37.204040 kernel: ACPI: Core revision 20210730 Oct 2 19:26:37.204056 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Oct 2 19:26:37.204072 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:26:37.204088 kernel: LSM: Security Framework initializing Oct 2 19:26:37.204103 kernel: SELinux: Initializing. Oct 2 19:26:37.204119 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:26:37.204138 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:26:37.204154 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:26:37.204170 kernel: Platform MSI: ITS@0x10080000 domain created Oct 2 19:26:37.204185 kernel: PCI/MSI: ITS@0x10080000 domain created Oct 2 19:26:37.204200 kernel: Remapping and enabling EFI services. Oct 2 19:26:37.204215 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:26:37.204230 kernel: Detected PIPT I-cache on CPU1 Oct 2 19:26:37.204246 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Oct 2 19:26:37.204262 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000 Oct 2 19:26:37.204281 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Oct 2 19:26:37.204297 kernel: smp: Brought up 1 node, 2 CPUs Oct 2 19:26:37.204312 kernel: SMP: Total of 2 processors activated. 
Oct 2 19:26:37.204328 kernel: CPU features: detected: 32-bit EL0 Support Oct 2 19:26:37.204343 kernel: CPU features: detected: 32-bit EL1 Support Oct 2 19:26:37.204359 kernel: CPU features: detected: CRC32 instructions Oct 2 19:26:37.204374 kernel: CPU: All CPU(s) started at EL1 Oct 2 19:26:37.204390 kernel: alternatives: patching kernel code Oct 2 19:26:37.204405 kernel: devtmpfs: initialized Oct 2 19:26:37.204424 kernel: KASLR disabled due to lack of seed Oct 2 19:26:37.204440 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:26:37.204456 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 2 19:26:37.204482 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:26:37.204502 kernel: SMBIOS 3.0.0 present. Oct 2 19:26:37.204518 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Oct 2 19:26:37.204534 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:26:37.204550 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 2 19:26:37.204566 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 2 19:26:37.204582 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 2 19:26:37.204598 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:26:37.204615 kernel: audit: type=2000 audit(0.255:1): state=initialized audit_enabled=0 res=1 Oct 2 19:26:37.204634 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:26:37.204651 kernel: cpuidle: using governor menu Oct 2 19:26:37.204667 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 2 19:26:37.204683 kernel: ASID allocator initialised with 32768 entries Oct 2 19:26:37.211744 kernel: ACPI: bus type PCI registered Oct 2 19:26:37.211780 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:26:37.211798 kernel: Serial: AMBA PL011 UART driver Oct 2 19:26:37.211815 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:26:37.211832 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Oct 2 19:26:37.211848 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:26:37.211864 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Oct 2 19:26:37.211881 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:26:37.211897 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 2 19:26:37.211913 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:26:37.211933 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:26:37.211950 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:26:37.211966 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:26:37.211982 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:26:37.211998 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:26:37.212014 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:26:37.212030 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 19:26:37.212046 kernel: ACPI: Interpreter enabled Oct 2 19:26:37.212062 kernel: ACPI: Using GIC for interrupt routing Oct 2 19:26:37.212082 kernel: ACPI: MCFG table detected, 1 entries Oct 2 19:26:37.212099 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Oct 2 19:26:37.212501 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:26:37.212720 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 2 19:26:37.213117 kernel: 
acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 2 19:26:37.213314 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Oct 2 19:26:37.213526 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Oct 2 19:26:37.213559 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Oct 2 19:26:37.213576 kernel: acpiphp: Slot [1] registered Oct 2 19:26:37.213593 kernel: acpiphp: Slot [2] registered Oct 2 19:26:37.213610 kernel: acpiphp: Slot [3] registered Oct 2 19:26:37.213626 kernel: acpiphp: Slot [4] registered Oct 2 19:26:37.213642 kernel: acpiphp: Slot [5] registered Oct 2 19:26:37.213658 kernel: acpiphp: Slot [6] registered Oct 2 19:26:37.213674 kernel: acpiphp: Slot [7] registered Oct 2 19:26:37.213690 kernel: acpiphp: Slot [8] registered Oct 2 19:26:37.213739 kernel: acpiphp: Slot [9] registered Oct 2 19:26:37.213756 kernel: acpiphp: Slot [10] registered Oct 2 19:26:37.213773 kernel: acpiphp: Slot [11] registered Oct 2 19:26:37.213789 kernel: acpiphp: Slot [12] registered Oct 2 19:26:37.213805 kernel: acpiphp: Slot [13] registered Oct 2 19:26:37.213821 kernel: acpiphp: Slot [14] registered Oct 2 19:26:37.213837 kernel: acpiphp: Slot [15] registered Oct 2 19:26:37.213853 kernel: acpiphp: Slot [16] registered Oct 2 19:26:37.213868 kernel: acpiphp: Slot [17] registered Oct 2 19:26:37.213884 kernel: acpiphp: Slot [18] registered Oct 2 19:26:37.213904 kernel: acpiphp: Slot [19] registered Oct 2 19:26:37.213920 kernel: acpiphp: Slot [20] registered Oct 2 19:26:37.213936 kernel: acpiphp: Slot [21] registered Oct 2 19:26:37.213952 kernel: acpiphp: Slot [22] registered Oct 2 19:26:37.213968 kernel: acpiphp: Slot [23] registered Oct 2 19:26:37.213984 kernel: acpiphp: Slot [24] registered Oct 2 19:26:37.214000 kernel: acpiphp: Slot [25] registered Oct 2 19:26:37.214016 kernel: acpiphp: Slot [26] registered Oct 2 19:26:37.214032 kernel: acpiphp: Slot [27] registered Oct 2 19:26:37.214052 kernel: acpiphp: Slot [28] registered Oct 2 19:26:37.214068 kernel: acpiphp: Slot [29] registered Oct 2 19:26:37.214084 kernel: acpiphp: Slot [30] registered Oct 2 19:26:37.214100 kernel: acpiphp: Slot [31] registered Oct 2 19:26:37.214116 kernel: PCI host bridge to bus 0000:00 Oct 2 19:26:37.214312 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Oct 2 19:26:37.214488 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 2 19:26:37.214661 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Oct 2 19:26:37.214857 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Oct 2 19:26:37.215080 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Oct 2 19:26:37.215301 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Oct 2 19:26:37.215506 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Oct 2 19:26:37.215744 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Oct 2 19:26:37.215942 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Oct 2 19:26:37.216148 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 2 19:26:37.216360 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Oct 2 19:26:37.216560 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Oct 2 19:26:37.233390 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Oct 2 19:26:37.235288 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Oct 2 19:26:37.235489 kernel: 
pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 2 19:26:37.235685 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Oct 2 19:26:37.245630 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Oct 2 19:26:37.245881 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Oct 2 19:26:37.246079 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Oct 2 19:26:37.246280 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Oct 2 19:26:37.246462 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Oct 2 19:26:37.246635 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 2 19:26:37.246840 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Oct 2 19:26:37.246873 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 2 19:26:37.246891 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 2 19:26:37.246908 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 2 19:26:37.246925 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 2 19:26:37.246942 kernel: iommu: Default domain type: Translated Oct 2 19:26:37.246958 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 2 19:26:37.246974 kernel: vgaarb: loaded Oct 2 19:26:37.246990 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:26:37.247007 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:26:37.247027 kernel: PTP clock support registered Oct 2 19:26:37.247044 kernel: Registered efivars operations Oct 2 19:26:37.247060 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 2 19:26:37.247076 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:26:37.247092 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:26:37.247109 kernel: pnp: PnP ACPI init Oct 2 19:26:37.247311 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Oct 2 19:26:37.247336 kernel: pnp: PnP ACPI: found 1 devices Oct 2 19:26:37.247353 kernel: NET: Registered PF_INET protocol family Oct 2 19:26:37.247375 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 19:26:37.247392 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 2 19:26:37.247409 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:26:37.247425 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:26:37.247442 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 2 19:26:37.247458 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 2 19:26:37.247474 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:26:37.247491 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:26:37.247507 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:26:37.247527 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:26:37.247544 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Oct 2 19:26:37.247560 kernel: kvm [1]: HYP mode not available Oct 2 19:26:37.247576 kernel: Initialise system trusted keyrings Oct 2 19:26:37.247592 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 2 19:26:37.247609 kernel: Key type asymmetric registered Oct 2 19:26:37.247625 kernel: Asymmetric key parser 'x509' registered Oct 2 19:26:37.247641 kernel: Block 
layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:26:37.247657 kernel: io scheduler mq-deadline registered Oct 2 19:26:37.247678 kernel: io scheduler kyber registered Oct 2 19:26:37.247714 kernel: io scheduler bfq registered Oct 2 19:26:37.247922 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Oct 2 19:26:37.247947 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 2 19:26:37.247964 kernel: ACPI: button: Power Button [PWRB] Oct 2 19:26:37.247981 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:26:37.247999 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Oct 2 19:26:37.248187 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Oct 2 19:26:37.248215 kernel: printk: console [ttyS0] disabled Oct 2 19:26:37.248232 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Oct 2 19:26:37.248249 kernel: printk: console [ttyS0] enabled Oct 2 19:26:37.248265 kernel: printk: bootconsole [uart0] disabled Oct 2 19:26:37.248281 kernel: thunder_xcv, ver 1.0 Oct 2 19:26:37.248297 kernel: thunder_bgx, ver 1.0 Oct 2 19:26:37.248313 kernel: nicpf, ver 1.0 Oct 2 19:26:37.248329 kernel: nicvf, ver 1.0 Oct 2 19:26:37.248552 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 2 19:26:37.250338 kernel: rtc-efi rtc-efi.0: setting system clock to 2023-10-02T19:26:36 UTC (1696274796) Oct 2 19:26:37.250376 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 2 19:26:37.250393 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:26:37.250410 kernel: Segment Routing with IPv6 Oct 2 19:26:37.250426 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:26:37.250443 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:26:37.250459 kernel: Key type dns_resolver registered Oct 2 19:26:37.250475 kernel: registered taskstats version 1 Oct 2 19:26:37.250498 kernel: Loading compiled-in X.509 certificates Oct 2 19:26:37.250515 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 3a2a38edc68cb70dc60ec0223a6460557b3bb28d' Oct 2 19:26:37.250531 kernel: Key type .fscrypt registered Oct 2 19:26:37.250547 kernel: Key type fscrypt-provisioning registered Oct 2 19:26:37.250563 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 2 19:26:37.250579 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:26:37.250596 kernel: ima: No architecture policies found Oct 2 19:26:37.250611 kernel: Freeing unused kernel memory: 34560K Oct 2 19:26:37.250628 kernel: Run /init as init process Oct 2 19:26:37.250648 kernel: with arguments: Oct 2 19:26:37.250664 kernel: /init Oct 2 19:26:37.250680 kernel: with environment: Oct 2 19:26:37.250711 kernel: HOME=/ Oct 2 19:26:37.250732 kernel: TERM=linux Oct 2 19:26:37.250748 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:26:37.250770 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:26:37.250791 systemd[1]: Detected virtualization amazon. Oct 2 19:26:37.250814 systemd[1]: Detected architecture arm64. Oct 2 19:26:37.250831 systemd[1]: Running in initrd. Oct 2 19:26:37.250849 systemd[1]: No hostname configured, using default hostname. Oct 2 19:26:37.250866 systemd[1]: Hostname set to . 
Oct 2 19:26:37.250884 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:26:37.250902 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:26:37.250919 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:26:37.250937 systemd[1]: Reached target cryptsetup.target. Oct 2 19:26:37.250958 systemd[1]: Reached target paths.target. Oct 2 19:26:37.250975 systemd[1]: Reached target slices.target. Oct 2 19:26:37.250993 systemd[1]: Reached target swap.target. Oct 2 19:26:37.251010 systemd[1]: Reached target timers.target. Oct 2 19:26:37.251028 systemd[1]: Listening on iscsid.socket. Oct 2 19:26:37.251046 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:26:37.251064 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:26:37.251081 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:26:37.251103 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:26:37.251120 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:26:37.251138 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:26:37.251155 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:26:37.251173 systemd[1]: Reached target sockets.target. Oct 2 19:26:37.251191 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:26:37.251208 systemd[1]: Finished network-cleanup.service. Oct 2 19:26:37.251225 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:26:37.251243 systemd[1]: Starting systemd-journald.service... Oct 2 19:26:37.251265 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:26:37.251282 systemd[1]: Starting systemd-resolved.service... Oct 2 19:26:37.251300 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:26:37.251318 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:26:37.251336 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:26:37.251354 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:26:37.251371 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:26:37.251389 kernel: audit: type=1130 audit(1696274797.189:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.251411 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:26:37.251428 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:26:37.251446 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:26:37.251464 kernel: audit: type=1130 audit(1696274797.234:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.251480 kernel: Bridge firewalling registered Oct 2 19:26:37.251500 systemd-journald[308]: Journal started Oct 2 19:26:37.251589 systemd-journald[308]: Runtime Journal (/run/log/journal/ec22beba6d55acae28478f042dbd5399) is 8.0M, max 75.4M, 67.4M free. Oct 2 19:26:37.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.262857 systemd[1]: Started systemd-journald.service. 
Oct 2 19:26:37.262907 kernel: audit: type=1130 audit(1696274797.254:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.173805 systemd-modules-load[309]: Inserted module 'overlay' Oct 2 19:26:37.247120 systemd-modules-load[309]: Inserted module 'br_netfilter' Oct 2 19:26:37.277742 kernel: SCSI subsystem initialized Oct 2 19:26:37.296740 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:26:37.304977 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:26:37.305042 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:26:37.305570 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:26:37.316238 kernel: audit: type=1130 audit(1696274797.306:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.317591 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:26:37.321165 systemd-resolved[310]: Positive Trust Anchors: Oct 2 19:26:37.322235 systemd-resolved[310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:26:37.322295 systemd-resolved[310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:26:37.347029 systemd-modules-load[309]: Inserted module 'dm_multipath' Oct 2 19:26:37.350890 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:26:37.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.361669 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:26:37.372256 kernel: audit: type=1130 audit(1696274797.353:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.404922 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:26:37.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:37.416789 kernel: audit: type=1130 audit(1696274797.407:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.418780 dracut-cmdline[325]: dracut-dracut-053 Oct 2 19:26:37.431244 dracut-cmdline[325]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 19:26:37.674731 kernel: Loading iSCSI transport class v2.0-870. Oct 2 19:26:37.685736 kernel: iscsi: registered transport (tcp) Oct 2 19:26:37.713377 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:26:37.713446 kernel: QLogic iSCSI HBA Driver Oct 2 19:26:37.872750 kernel: random: crng init done Oct 2 19:26:37.872854 systemd-resolved[310]: Defaulting to hostname 'linux'. Oct 2 19:26:37.876729 systemd[1]: Started systemd-resolved.service. Oct 2 19:26:37.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.887211 systemd[1]: Reached target nss-lookup.target. Oct 2 19:26:37.890770 kernel: audit: type=1130 audit(1696274797.877:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.943833 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:26:37.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.948750 systemd[1]: Starting dracut-pre-udev.service... Oct 2 19:26:37.958729 kernel: audit: type=1130 audit(1696274797.945:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.047756 kernel: raid6: neonx8 gen() 6372 MB/s Oct 2 19:26:38.065743 kernel: raid6: neonx8 xor() 4717 MB/s Oct 2 19:26:38.083750 kernel: raid6: neonx4 gen() 6530 MB/s Oct 2 19:26:38.101743 kernel: raid6: neonx4 xor() 4896 MB/s Oct 2 19:26:38.119746 kernel: raid6: neonx2 gen() 5783 MB/s Oct 2 19:26:38.137744 kernel: raid6: neonx2 xor() 4531 MB/s Oct 2 19:26:38.155745 kernel: raid6: neonx1 gen() 4479 MB/s Oct 2 19:26:38.173741 kernel: raid6: neonx1 xor() 3679 MB/s Oct 2 19:26:38.191744 kernel: raid6: int64x8 gen() 3423 MB/s Oct 2 19:26:38.209744 kernel: raid6: int64x8 xor() 2085 MB/s Oct 2 19:26:38.227747 kernel: raid6: int64x4 gen() 3832 MB/s Oct 2 19:26:38.245744 kernel: raid6: int64x4 xor() 2194 MB/s Oct 2 19:26:38.263746 kernel: raid6: int64x2 gen() 3604 MB/s Oct 2 19:26:38.281746 kernel: raid6: int64x2 xor() 1948 MB/s Oct 2 19:26:38.299742 kernel: raid6: int64x1 gen() 2762 MB/s Oct 2 19:26:38.319348 kernel: raid6: int64x1 xor() 1451 MB/s Oct 2 19:26:38.319412 kernel: raid6: using algorithm neonx4 gen() 6530 MB/s Oct 2 19:26:38.319436 kernel: raid6: .... 
xor() 4896 MB/s, rmw enabled Oct 2 19:26:38.321235 kernel: raid6: using neon recovery algorithm Oct 2 19:26:38.340758 kernel: xor: measuring software checksum speed Oct 2 19:26:38.342735 kernel: 8regs : 9400 MB/sec Oct 2 19:26:38.345737 kernel: 32regs : 11160 MB/sec Oct 2 19:26:38.349874 kernel: arm64_neon : 9627 MB/sec Oct 2 19:26:38.349940 kernel: xor: using function: 32regs (11160 MB/sec) Oct 2 19:26:38.441750 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Oct 2 19:26:38.483428 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:26:38.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.488923 systemd[1]: Starting systemd-udevd.service... Oct 2 19:26:38.486000 audit: BPF prog-id=7 op=LOAD Oct 2 19:26:38.486000 audit: BPF prog-id=8 op=LOAD Oct 2 19:26:38.497100 kernel: audit: type=1130 audit(1696274798.483:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.529592 systemd-udevd[508]: Using default interface naming scheme 'v252'. Oct 2 19:26:38.539979 systemd[1]: Started systemd-udevd.service. Oct 2 19:26:38.556379 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:26:38.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.621056 dracut-pre-trigger[529]: rd.md=0: removing MD RAID activation Oct 2 19:26:38.735072 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:26:38.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.739722 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:26:38.857170 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:26:38.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:39.003737 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 2 19:26:39.003811 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Oct 2 19:26:39.021750 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Oct 2 19:26:39.026762 kernel: ena 0000:00:05.0: ENA device version: 0.10 Oct 2 19:26:39.027110 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Oct 2 19:26:39.029724 kernel: nvme nvme0: pci function 0000:00:04.0 Oct 2 19:26:39.037724 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:a6:2b:b0:a1:53 Oct 2 19:26:39.038049 kernel: nvme nvme0: 2/0/0 default/read/poll queues Oct 2 19:26:39.046732 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 2 19:26:39.046806 kernel: GPT:9289727 != 16777215 Oct 2 19:26:39.046842 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 2 19:26:39.050461 kernel: GPT:9289727 != 16777215 Oct 2 19:26:39.050523 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 2 19:26:39.054136 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:26:39.058952 (udev-worker)[569]: Network interface NamePolicy= disabled on kernel command line. 
Oct 2 19:26:39.134742 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (559) Oct 2 19:26:39.208825 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:26:39.336304 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:26:39.346453 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:26:39.369036 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:26:39.383461 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:26:39.406680 systemd[1]: Starting disk-uuid.service... Oct 2 19:26:39.435734 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:26:39.436886 disk-uuid[674]: Primary Header is updated. Oct 2 19:26:39.436886 disk-uuid[674]: Secondary Entries is updated. Oct 2 19:26:39.436886 disk-uuid[674]: Secondary Header is updated. Oct 2 19:26:39.458731 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:26:39.468722 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:26:40.468735 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:26:40.468959 disk-uuid[675]: The operation has completed successfully. Oct 2 19:26:40.757816 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:26:40.758366 systemd[1]: Finished disk-uuid.service. Oct 2 19:26:40.772190 kernel: kauditd_printk_skb: 5 callbacks suppressed Oct 2 19:26:40.772224 kernel: audit: type=1130 audit(1696274800.760:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:40.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:40.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:40.779354 kernel: audit: type=1131 audit(1696274800.770:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:40.773553 systemd[1]: Starting verity-setup.service... Oct 2 19:26:40.822735 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 2 19:26:40.912725 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:26:40.918549 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:26:40.932559 systemd[1]: Finished verity-setup.service. Oct 2 19:26:40.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:40.943775 kernel: audit: type=1130 audit(1696274800.933:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:41.024726 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:26:41.026093 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:26:41.029377 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 19:26:41.033510 systemd[1]: Starting ignition-setup.service... Oct 2 19:26:41.045471 systemd[1]: Starting parse-ip-for-networkd.service... 
Oct 2 19:26:41.074162 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:26:41.074233 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:26:41.074258 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:26:41.092785 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:26:41.133173 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:26:41.163575 systemd[1]: Finished ignition-setup.service. Oct 2 19:26:41.175923 kernel: audit: type=1130 audit(1696274801.163:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:41.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:41.168293 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:26:41.422806 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:26:41.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:41.437691 kernel: audit: type=1130 audit(1696274801.423:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:41.435537 systemd[1]: Starting systemd-networkd.service... Oct 2 19:26:41.432000 audit: BPF prog-id=9 op=LOAD Oct 2 19:26:41.443735 kernel: audit: type=1334 audit(1696274801.432:21): prog-id=9 op=LOAD Oct 2 19:26:41.502582 systemd-networkd[1187]: lo: Link UP Oct 2 19:26:41.504416 systemd-networkd[1187]: lo: Gained carrier Oct 2 19:26:41.507688 systemd-networkd[1187]: Enumeration completed Oct 2 19:26:41.509602 systemd[1]: Started systemd-networkd.service. Oct 2 19:26:41.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:41.511588 systemd[1]: Reached target network.target. Oct 2 19:26:41.524541 systemd[1]: Starting iscsiuio.service... Oct 2 19:26:41.528971 systemd-networkd[1187]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:26:41.534745 kernel: audit: type=1130 audit(1696274801.510:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:41.536890 systemd-networkd[1187]: eth0: Link UP Oct 2 19:26:41.537161 systemd-networkd[1187]: eth0: Gained carrier Oct 2 19:26:41.550353 systemd[1]: Started iscsiuio.service. Oct 2 19:26:41.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:41.553513 systemd[1]: Starting iscsid.service... Oct 2 19:26:41.569741 kernel: audit: type=1130 audit(1696274801.550:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:41.570581 iscsid[1196]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:26:41.570581 iscsid[1196]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 2 19:26:41.570581 iscsid[1196]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:26:41.570581 iscsid[1196]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 19:26:41.587999 iscsid[1196]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:26:41.587999 iscsid[1196]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:26:41.602870 systemd[1]: Started iscsid.service. Oct 2 19:26:41.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:41.607930 systemd-networkd[1187]: eth0: DHCPv4 address 172.31.20.240/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 2 19:26:41.626348 kernel: audit: type=1130 audit(1696274801.603:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:41.616836 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:26:41.667072 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:26:41.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:41.670565 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:26:41.681392 kernel: audit: type=1130 audit(1696274801.668:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:41.679633 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:26:41.688858 systemd[1]: Reached target remote-fs.target. Oct 2 19:26:41.693842 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:26:41.730788 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:26:41.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:41.775333 ignition[1101]: Ignition 2.14.0 Oct 2 19:26:41.775363 ignition[1101]: Stage: fetch-offline Oct 2 19:26:41.775942 ignition[1101]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:26:41.776517 ignition[1101]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:26:41.798849 ignition[1101]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:26:41.801565 ignition[1101]: Ignition finished successfully Oct 2 19:26:41.804881 systemd[1]: Finished ignition-fetch-offline.service. 
Oct 2 19:26:41.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:41.817987 systemd[1]: Starting ignition-fetch.service... Oct 2 19:26:41.848854 ignition[1211]: Ignition 2.14.0 Oct 2 19:26:41.848883 ignition[1211]: Stage: fetch Oct 2 19:26:41.849255 ignition[1211]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:26:41.849314 ignition[1211]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:26:41.866648 ignition[1211]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:26:41.869093 ignition[1211]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:26:41.877663 ignition[1211]: INFO : PUT result: OK Oct 2 19:26:41.880644 ignition[1211]: DEBUG : parsed url from cmdline: "" Oct 2 19:26:41.880644 ignition[1211]: INFO : no config URL provided Oct 2 19:26:41.880644 ignition[1211]: INFO : reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:26:41.880644 ignition[1211]: INFO : no config at "/usr/lib/ignition/user.ign" Oct 2 19:26:41.880644 ignition[1211]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:26:41.891348 ignition[1211]: INFO : PUT result: OK Oct 2 19:26:41.891348 ignition[1211]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Oct 2 19:26:41.891348 ignition[1211]: INFO : GET result: OK Oct 2 19:26:41.896658 ignition[1211]: DEBUG : parsing config with SHA512: ef8576792ad5ef3b5ffd9dd1b2bb3aaca6693eba1d27bb80bf884e1df2f328c568a95dbd1693be20c5caa49962b69d858a8d6e1192906ace37b958a80e9ecd79 Oct 2 19:26:41.925775 unknown[1211]: fetched base config from "system" Oct 2 19:26:41.926051 unknown[1211]: fetched base config from "system" Oct 2 19:26:41.929343 ignition[1211]: fetch: fetch complete Oct 2 19:26:41.926068 unknown[1211]: fetched user config from "aws" Oct 2 19:26:41.929374 ignition[1211]: fetch: fetch passed Oct 2 19:26:41.933750 ignition[1211]: Ignition finished successfully Oct 2 19:26:41.940659 systemd[1]: Finished ignition-fetch.service. Oct 2 19:26:41.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:41.944373 systemd[1]: Starting ignition-kargs.service... Oct 2 19:26:41.979748 ignition[1217]: Ignition 2.14.0 Oct 2 19:26:41.979775 ignition[1217]: Stage: kargs Oct 2 19:26:41.980156 ignition[1217]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:26:41.980216 ignition[1217]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:26:41.996656 ignition[1217]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:26:41.999045 ignition[1217]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:26:42.002909 ignition[1217]: INFO : PUT result: OK Oct 2 19:26:42.007470 ignition[1217]: kargs: kargs passed Oct 2 19:26:42.007591 ignition[1217]: Ignition finished successfully Oct 2 19:26:42.012015 systemd[1]: Finished ignition-kargs.service. 
Oct 2 19:26:42.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.016139 systemd[1]: Starting ignition-disks.service... Oct 2 19:26:42.046538 ignition[1223]: Ignition 2.14.0 Oct 2 19:26:42.046567 ignition[1223]: Stage: disks Oct 2 19:26:42.046967 ignition[1223]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:26:42.047028 ignition[1223]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:26:42.063204 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:26:42.065560 ignition[1223]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:26:42.069270 ignition[1223]: INFO : PUT result: OK Oct 2 19:26:42.073754 ignition[1223]: disks: disks passed Oct 2 19:26:42.073844 ignition[1223]: Ignition finished successfully Oct 2 19:26:42.077560 systemd[1]: Finished ignition-disks.service. Oct 2 19:26:42.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.079391 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:26:42.082646 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:26:42.084320 systemd[1]: Reached target local-fs.target. Oct 2 19:26:42.087392 systemd[1]: Reached target sysinit.target. Oct 2 19:26:42.090341 systemd[1]: Reached target basic.target. Oct 2 19:26:42.094534 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:26:42.152463 systemd-fsck[1231]: ROOT: clean, 603/553520 files, 56011/553472 blocks Oct 2 19:26:42.163414 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:26:42.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.167892 systemd[1]: Mounting sysroot.mount... Oct 2 19:26:42.200750 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:26:42.200982 systemd[1]: Mounted sysroot.mount. Oct 2 19:26:42.203000 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:26:42.221456 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:26:42.225358 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 19:26:42.228457 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:26:42.228818 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:26:42.253028 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:26:42.267451 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:26:42.272374 systemd[1]: Starting initrd-setup-root.service... 
Oct 2 19:26:42.295736 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1248) Oct 2 19:26:42.302100 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:26:42.302170 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:26:42.305683 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:26:42.312748 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:26:42.313932 initrd-setup-root[1253]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:26:42.321031 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:26:42.356048 initrd-setup-root[1279]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:26:42.376510 initrd-setup-root[1287]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:26:42.396885 initrd-setup-root[1295]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:26:42.564883 systemd-networkd[1187]: eth0: Gained IPv6LL Oct 2 19:26:42.603493 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:26:42.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.608299 systemd[1]: Starting ignition-mount.service... Oct 2 19:26:42.611342 systemd[1]: Starting sysroot-boot.service... Oct 2 19:26:42.644553 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Oct 2 19:26:42.644768 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Oct 2 19:26:42.671680 systemd[1]: Finished sysroot-boot.service. Oct 2 19:26:42.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.691202 ignition[1315]: INFO : Ignition 2.14.0 Oct 2 19:26:42.694931 ignition[1315]: INFO : Stage: mount Oct 2 19:26:42.694931 ignition[1315]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:26:42.694931 ignition[1315]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:26:42.711390 ignition[1315]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:26:42.713969 ignition[1315]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:26:42.717663 ignition[1315]: INFO : PUT result: OK Oct 2 19:26:42.722819 ignition[1315]: INFO : mount: mount passed Oct 2 19:26:42.724924 ignition[1315]: INFO : Ignition finished successfully Oct 2 19:26:42.728097 systemd[1]: Finished ignition-mount.service. Oct 2 19:26:42.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.732739 systemd[1]: Starting ignition-files.service... Oct 2 19:26:42.756614 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Oct 2 19:26:42.780915 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1323) Oct 2 19:26:42.786900 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:26:42.786963 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:26:42.786987 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:26:42.796724 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:26:42.801514 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:26:42.835840 ignition[1342]: INFO : Ignition 2.14.0 Oct 2 19:26:42.835840 ignition[1342]: INFO : Stage: files Oct 2 19:26:42.839265 ignition[1342]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:26:42.839265 ignition[1342]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:26:42.857741 ignition[1342]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:26:42.860275 ignition[1342]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:26:42.864082 ignition[1342]: INFO : PUT result: OK Oct 2 19:26:42.868661 ignition[1342]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:26:42.873000 ignition[1342]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:26:42.873000 ignition[1342]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:26:42.920744 ignition[1342]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:26:42.923671 ignition[1342]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:26:42.927535 unknown[1342]: wrote ssh authorized keys file for user: core Oct 2 19:26:42.929887 ignition[1342]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:26:42.933312 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Oct 2 19:26:42.937250 ignition[1342]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1 Oct 2 19:26:43.104922 ignition[1342]: INFO : GET result: OK Oct 2 19:26:43.717322 ignition[1342]: DEBUG : file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a Oct 2 19:26:43.721986 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Oct 2 19:26:43.721986 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Oct 2 19:26:43.721986 ignition[1342]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1 Oct 2 19:26:43.809526 ignition[1342]: INFO : GET result: OK Oct 2 19:26:44.107396 ignition[1342]: DEBUG : file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251 Oct 2 19:26:44.112504 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Oct 2 
19:26:44.112504 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Oct 2 19:26:44.112504 ignition[1342]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:26:44.130576 ignition[1342]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2282065032" Oct 2 19:26:44.137446 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1347) Oct 2 19:26:44.137516 ignition[1342]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2282065032": device or resource busy Oct 2 19:26:44.137516 ignition[1342]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2282065032", trying btrfs: device or resource busy Oct 2 19:26:44.137516 ignition[1342]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2282065032" Oct 2 19:26:44.153922 ignition[1342]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2282065032" Oct 2 19:26:44.159330 ignition[1342]: INFO : op(3): [started] unmounting "/mnt/oem2282065032" Oct 2 19:26:44.162989 ignition[1342]: INFO : op(3): [finished] unmounting "/mnt/oem2282065032" Oct 2 19:26:44.162989 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Oct 2 19:26:44.169610 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:26:44.169610 ignition[1342]: INFO : GET https://storage.googleapis.com/kubernetes-release/release/v1.28.1/bin/linux/arm64/kubeadm: attempt #1 Oct 2 19:26:44.169734 systemd[1]: mnt-oem2282065032.mount: Deactivated successfully. 
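In the block above, "oem config not found in /usr/share/oem, looking on oem partition" is followed by Ignition first trying to mount /dev/disk/by-label/OEM as ext4 and, when that fails with "device or resource busy", retrying as btrfs. A rough sketch of that try-one-filesystem-then-the-other pattern; this is not Ignition's actual Go implementation, and the function name and temporary-directory handling are hypothetical (the device path is the one in the log):

    import subprocess
    import tempfile

    def mount_with_fallback(device: str, fstypes=("ext4", "btrfs")) -> str:
        """Try mounting `device` with each filesystem type in turn; return the mountpoint."""
        mountpoint = tempfile.mkdtemp(prefix="oem")   # analogous to the log's /mnt/oemXXXXXXXXXX
        last_err = None
        for fstype in fstypes:
            result = subprocess.run(
                ["mount", "-t", fstype, device, mountpoint],
                capture_output=True, text=True,
            )
            if result.returncode == 0:
                return mountpoint
            last_err = result.stderr.strip()          # e.g. "device or resource busy"
        raise RuntimeError(f"could not mount {device}: {last_err}")

    # Usage (requires root; shown only as a sketch):
    # path = mount_with_fallback("/dev/disk/by-label/OEM")
    # ... copy the file out, then: subprocess.run(["umount", path], check=True)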
Oct 2 19:26:44.291578 ignition[1342]: INFO : GET result: OK Oct 2 19:26:45.836826 ignition[1342]: DEBUG : file matches expected sum of: 5a08b81f9cc82d3cce21130856ca63b8dafca9149d9775dd25b376eb0f18209aa0e4a47c0a6d7e6fb1316aacd5d59dec770f26c09120c866949d70bc415518b3 Oct 2 19:26:45.841832 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:26:45.841832 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:26:45.841832 ignition[1342]: INFO : GET https://storage.googleapis.com/kubernetes-release/release/v1.28.1/bin/linux/arm64/kubelet: attempt #1 Oct 2 19:26:45.897272 ignition[1342]: INFO : GET result: OK Oct 2 19:26:47.400290 ignition[1342]: DEBUG : file matches expected sum of: 5a898ef543a6482895101ea58e33602e3c0a7682d322aaf08ac3dc8a5a3c8da8f09600d577024549288f8cebb1a86f9c79927796b69a3d8fe989ca8f12b147d6 Oct 2 19:26:47.405349 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:26:47.405349 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:26:47.405349 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:26:47.405349 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:26:47.419127 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:26:47.419127 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Oct 2 19:26:47.426614 ignition[1342]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:26:47.438678 ignition[1342]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2005299912" Oct 2 19:26:47.438678 ignition[1342]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2005299912": device or resource busy Oct 2 19:26:47.438678 ignition[1342]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2005299912", trying btrfs: device or resource busy Oct 2 19:26:47.438678 ignition[1342]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2005299912" Oct 2 19:26:47.438678 ignition[1342]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2005299912" Oct 2 19:26:47.460730 ignition[1342]: INFO : op(6): [started] unmounting "/mnt/oem2005299912" Oct 2 19:26:47.464807 ignition[1342]: INFO : op(6): [finished] unmounting "/mnt/oem2005299912" Oct 2 19:26:47.464807 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Oct 2 19:26:47.464807 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Oct 2 19:26:47.464807 ignition[1342]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:26:47.478759 systemd[1]: mnt-oem2005299912.mount: Deactivated successfully. 
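Each "GET ... attempt #1 / GET result: OK / file matches expected sum of: ..." sequence above is a download followed by a SHA512 check against the digest embedded in the Ignition config. A small stand-alone sketch of that verify-after-download step; the URL and expected digest are copied from the kubelet entry in the log, while the function name and /tmp destination path are illustrative:

    import hashlib
    import urllib.request

    def download_and_verify(url: str, expected_sha512: str, dest: str) -> None:
        """Stream `url` to `dest`, hashing as we go, and fail if the SHA512 digest differs."""
        digest = hashlib.sha512()
        with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
            for chunk in iter(lambda: resp.read(1 << 20), b""):
                digest.update(chunk)
                out.write(chunk)
        if digest.hexdigest() != expected_sha512:
            raise ValueError(f"{dest}: digest mismatch")   # Ignition would retry or abort here

    if __name__ == "__main__":
        # The kubelet file written in the log above:
        download_and_verify(
            "https://storage.googleapis.com/kubernetes-release/release/v1.28.1/bin/linux/arm64/kubelet",
            "5a898ef543a6482895101ea58e33602e3c0a7682d322aaf08ac3dc8a5a3c8da8"
            "f09600d577024549288f8cebb1a86f9c79927796b69a3d8fe989ca8f12b147d6",
            "/tmp/kubelet",
        )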
Oct 2 19:26:47.495497 ignition[1342]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem503209945" Oct 2 19:26:47.498430 ignition[1342]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem503209945": device or resource busy Oct 2 19:26:47.498430 ignition[1342]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem503209945", trying btrfs: device or resource busy Oct 2 19:26:47.498430 ignition[1342]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem503209945" Oct 2 19:26:47.508074 ignition[1342]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem503209945" Oct 2 19:26:47.508074 ignition[1342]: INFO : op(9): [started] unmounting "/mnt/oem503209945" Oct 2 19:26:47.508074 ignition[1342]: INFO : op(9): [finished] unmounting "/mnt/oem503209945" Oct 2 19:26:47.508074 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Oct 2 19:26:47.508074 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 19:26:47.508074 ignition[1342]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:26:47.543469 ignition[1342]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3836860093" Oct 2 19:26:47.543469 ignition[1342]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3836860093": device or resource busy Oct 2 19:26:47.543469 ignition[1342]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3836860093", trying btrfs: device or resource busy Oct 2 19:26:47.543469 ignition[1342]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3836860093" Oct 2 19:26:47.543469 ignition[1342]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3836860093" Oct 2 19:26:47.543469 ignition[1342]: INFO : op(c): [started] unmounting "/mnt/oem3836860093" Oct 2 19:26:47.561220 ignition[1342]: INFO : op(c): [finished] unmounting "/mnt/oem3836860093" Oct 2 19:26:47.561220 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 19:26:47.561220 ignition[1342]: INFO : files: op(d): [started] processing unit "amazon-ssm-agent.service" Oct 2 19:26:47.561220 ignition[1342]: INFO : files: op(d): op(e): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Oct 2 19:26:47.561220 ignition[1342]: INFO : files: op(d): op(e): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Oct 2 19:26:47.561220 ignition[1342]: INFO : files: op(d): [finished] processing unit "amazon-ssm-agent.service" Oct 2 19:26:47.561220 ignition[1342]: INFO : files: op(f): [started] processing unit "nvidia.service" Oct 2 19:26:47.561220 ignition[1342]: INFO : files: op(f): [finished] processing unit "nvidia.service" Oct 2 19:26:47.561220 ignition[1342]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:26:47.561220 ignition[1342]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:26:47.561220 ignition[1342]: INFO : files: op(11): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:26:47.561220 ignition[1342]: INFO : files: op(11): op(12): [started] writing unit 
"prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:26:47.561220 ignition[1342]: INFO : files: op(11): op(12): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:26:47.561220 ignition[1342]: INFO : files: op(11): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:26:47.561220 ignition[1342]: INFO : files: op(13): [started] processing unit "prepare-critools.service" Oct 2 19:26:47.561220 ignition[1342]: INFO : files: op(13): op(14): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:26:47.561220 ignition[1342]: INFO : files: op(13): op(14): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:26:47.561220 ignition[1342]: INFO : files: op(13): [finished] processing unit "prepare-critools.service" Oct 2 19:26:47.561220 ignition[1342]: INFO : files: op(15): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:26:47.561220 ignition[1342]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:26:47.622509 ignition[1342]: INFO : files: op(16): [started] setting preset to enabled for "amazon-ssm-agent.service" Oct 2 19:26:47.622509 ignition[1342]: INFO : files: op(16): [finished] setting preset to enabled for "amazon-ssm-agent.service" Oct 2 19:26:47.622509 ignition[1342]: INFO : files: op(17): [started] setting preset to enabled for "nvidia.service" Oct 2 19:26:47.622509 ignition[1342]: INFO : files: op(17): [finished] setting preset to enabled for "nvidia.service" Oct 2 19:26:47.622509 ignition[1342]: INFO : files: op(18): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:26:47.622509 ignition[1342]: INFO : files: op(18): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:26:47.622509 ignition[1342]: INFO : files: op(19): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:26:47.622509 ignition[1342]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:26:47.669537 ignition[1342]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:26:47.673546 ignition[1342]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:26:47.673546 ignition[1342]: INFO : files: files passed Oct 2 19:26:47.673546 ignition[1342]: INFO : Ignition finished successfully Oct 2 19:26:47.682618 systemd[1]: Finished ignition-files.service. Oct 2 19:26:47.697490 kernel: kauditd_printk_skb: 9 callbacks suppressed Oct 2 19:26:47.697555 kernel: audit: type=1130 audit(1696274807.685:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:47.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:47.698341 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:26:47.702520 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Oct 2 19:26:47.705351 systemd[1]: Starting ignition-quench.service... Oct 2 19:26:47.723586 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:26:47.725916 systemd[1]: Finished ignition-quench.service. Oct 2 19:26:47.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:47.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:47.744384 kernel: audit: type=1130 audit(1696274807.727:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:47.744461 kernel: audit: type=1131 audit(1696274807.727:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:47.756839 initrd-setup-root-after-ignition[1367]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:26:47.762341 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:26:47.766455 systemd[1]: Reached target ignition-complete.target. Oct 2 19:26:47.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:47.769619 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:26:47.793715 kernel: audit: type=1130 audit(1696274807.764:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:47.824097 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:26:47.825414 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:26:47.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:47.830805 systemd[1]: Reached target initrd-fs.target. Oct 2 19:26:47.858851 kernel: audit: type=1130 audit(1696274807.828:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:47.858893 kernel: audit: type=1131 audit(1696274807.828:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:47.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:47.845356 systemd[1]: Reached target initrd.target. Oct 2 19:26:47.846249 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:26:47.847751 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:26:47.889939 systemd[1]: Finished dracut-pre-pivot.service. 
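The audit records running through this log (SERVICE_START/SERVICE_STOP, type=1130/1131, plus the "kauditd_printk_skb: N callbacks suppressed" notices) share a fixed layout: an "audit(<epoch>.<ms>:<serial>):" stamp followed by key=value fields. A small parsing sketch, tested against one truncated record from this log; the field handling is deliberately naive (it does not treat the quoted msg='...' payload specially):

    import re

    AUDIT_RE = re.compile(r"audit\((?P<ts>\d+\.\d+):(?P<serial>\d+)\): (?P<body>.*)")

    def parse_audit(line: str) -> dict:
        """Split an audit record into timestamp, serial number, and flat key=value fields."""
        m = AUDIT_RE.search(line)
        fields = dict(kv.split("=", 1) for kv in m.group("body").split() if "=" in kv)
        return {"timestamp": float(m.group("ts")), "serial": int(m.group("serial")), **fields}

    record = ("audit(1696274807.727:36): pid=1 uid=0 auid=4294967295 ses=4294967295 "
              "subj=kernel msg='unit=ignition-quench comm=\"systemd\"")
    print(parse_audit(record)["serial"])   # -> 36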
Oct 2 19:26:47.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:47.896316 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:26:47.905945 kernel: audit: type=1130 audit(1696274807.893:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:47.925768 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:26:47.929391 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:26:47.945880 systemd[1]: Stopped target timers.target. Oct 2 19:26:47.949197 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:26:47.951405 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:26:47.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:47.962437 systemd[1]: Stopped target initrd.target. Oct 2 19:26:47.965377 kernel: audit: type=1131 audit(1696274807.953:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:47.965687 systemd[1]: Stopped target basic.target. Oct 2 19:26:47.968852 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:26:47.972459 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:26:47.976079 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:26:47.979787 systemd[1]: Stopped target remote-fs.target. Oct 2 19:26:47.983051 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:26:47.986530 systemd[1]: Stopped target sysinit.target. Oct 2 19:26:47.989679 systemd[1]: Stopped target local-fs.target. Oct 2 19:26:47.992923 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:26:47.996313 systemd[1]: Stopped target swap.target. Oct 2 19:26:47.999313 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:26:48.001548 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:26:48.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.005032 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:26:48.015128 kernel: audit: type=1131 audit(1696274808.003:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.015275 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:26:48.015838 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:26:48.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.023329 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:26:48.024371 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:26:48.034769 kernel: audit: type=1131 audit(1696274808.022:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:26:48.037301 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:26:48.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.037566 systemd[1]: Stopped ignition-files.service. Oct 2 19:26:48.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.046295 systemd[1]: Stopping ignition-mount.service... Oct 2 19:26:48.051440 systemd[1]: Stopping iscsid.service... Oct 2 19:26:48.053008 iscsid[1196]: iscsid shutting down. Oct 2 19:26:48.057766 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:26:48.061122 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:26:48.063291 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:26:48.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.073511 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:26:48.074335 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:26:48.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.087831 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:26:48.089326 systemd[1]: Stopped iscsid.service. Oct 2 19:26:48.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.097954 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:26:48.099426 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:26:48.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.105548 systemd[1]: Stopping iscsiuio.service... Oct 2 19:26:48.116362 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:26:48.118487 systemd[1]: Stopped iscsiuio.service. 
Oct 2 19:26:48.127447 ignition[1381]: INFO : Ignition 2.14.0 Oct 2 19:26:48.129251 ignition[1381]: INFO : Stage: umount Oct 2 19:26:48.129251 ignition[1381]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:26:48.129251 ignition[1381]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:26:48.147892 ignition[1381]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:26:48.150366 ignition[1381]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:26:48.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.153032 ignition[1381]: INFO : PUT result: OK Oct 2 19:26:48.158724 ignition[1381]: INFO : umount: umount passed Oct 2 19:26:48.160678 ignition[1381]: INFO : Ignition finished successfully Oct 2 19:26:48.164327 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:26:48.166331 systemd[1]: Stopped ignition-mount.service. Oct 2 19:26:48.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.168227 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:26:48.168326 systemd[1]: Stopped ignition-disks.service. Oct 2 19:26:48.170088 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:26:48.170175 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:26:48.171997 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 2 19:26:48.172090 systemd[1]: Stopped ignition-fetch.service. Oct 2 19:26:48.173963 systemd[1]: Stopped target network.target. Oct 2 19:26:48.175602 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:26:48.175722 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:26:48.177616 systemd[1]: Stopped target paths.target. Oct 2 19:26:48.195080 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:26:48.198553 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:26:48.223273 systemd[1]: Stopped target slices.target. Oct 2 19:26:48.226317 systemd[1]: Stopped target sockets.target. Oct 2 19:26:48.229342 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:26:48.229456 systemd[1]: Closed iscsid.socket. Oct 2 19:26:48.233570 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Oct 2 19:26:48.233667 systemd[1]: Closed iscsiuio.socket. Oct 2 19:26:48.236819 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:26:48.238131 systemd[1]: Stopped ignition-setup.service. Oct 2 19:26:48.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.243359 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:26:48.246766 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:26:48.249762 systemd-networkd[1187]: eth0: DHCPv6 lease lost Oct 2 19:26:48.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.250617 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:26:48.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.250835 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:26:48.279000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:26:48.259320 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:26:48.259531 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:26:48.264823 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:26:48.264896 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:26:48.266539 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:26:48.266628 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:26:48.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.269528 systemd[1]: Stopping network-cleanup.service... Oct 2 19:26:48.271019 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:26:48.271158 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:26:48.276531 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:26:48.276630 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:26:48.283348 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:26:48.283454 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:26:48.298255 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:26:48.310303 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:26:48.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:48.310837 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:26:48.327587 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:26:48.328162 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:26:48.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.335477 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:26:48.335973 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:26:48.337000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:26:48.341469 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:26:48.341750 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:26:48.346912 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:26:48.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.347027 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:26:48.351403 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:26:48.351500 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:26:48.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.362738 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:26:48.366213 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:26:48.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.370952 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:26:48.372972 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 19:26:48.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.373120 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 19:26:48.376982 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:26:48.377081 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:26:48.380457 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:26:48.380551 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:26:48.383282 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:26:48.383510 systemd[1]: Stopped network-cleanup.service. 
Oct 2 19:26:48.419000 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:26:48.419210 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:26:48.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:48.425672 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:26:48.430504 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:26:48.440177 systemd[1]: mnt-oem3836860093.mount: Deactivated successfully. Oct 2 19:26:48.442441 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:26:48.444506 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:26:48.446047 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 19:26:48.464801 systemd[1]: Switching root. Oct 2 19:26:48.493965 systemd-journald[308]: Journal stopped Oct 2 19:26:54.135457 systemd-journald[308]: Received SIGTERM from PID 1 (systemd). Oct 2 19:26:54.136116 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:26:54.136280 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 19:26:54.136322 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:26:54.136355 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:26:54.136391 kernel: SELinux: policy capability open_perms=1 Oct 2 19:26:54.136424 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:26:54.136454 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:26:54.136489 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:26:54.136521 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:26:54.136552 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:26:54.136584 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:26:54.136616 systemd[1]: Successfully loaded SELinux policy in 86.745ms. Oct 2 19:26:54.136730 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.273ms. Oct 2 19:26:54.136779 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:26:54.136815 systemd[1]: Detected virtualization amazon. Oct 2 19:26:54.136845 systemd[1]: Detected architecture arm64. Oct 2 19:26:54.136876 systemd[1]: Detected first boot. Oct 2 19:26:54.136906 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:26:54.136938 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:26:54.136971 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:26:54.137009 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 2 19:26:54.137044 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:26:54.137141 kernel: kauditd_printk_skb: 40 callbacks suppressed Oct 2 19:26:54.137179 kernel: audit: type=1334 audit(1696274813.576:85): prog-id=12 op=LOAD Oct 2 19:26:54.137208 kernel: audit: type=1334 audit(1696274813.576:86): prog-id=3 op=UNLOAD Oct 2 19:26:54.137238 kernel: audit: type=1334 audit(1696274813.578:87): prog-id=13 op=LOAD Oct 2 19:26:54.137268 kernel: audit: type=1334 audit(1696274813.581:88): prog-id=14 op=LOAD Oct 2 19:26:54.137295 kernel: audit: type=1334 audit(1696274813.581:89): prog-id=4 op=UNLOAD Oct 2 19:26:54.137332 kernel: audit: type=1334 audit(1696274813.581:90): prog-id=5 op=UNLOAD Oct 2 19:26:54.137381 kernel: audit: type=1334 audit(1696274813.583:91): prog-id=15 op=LOAD Oct 2 19:26:54.137416 kernel: audit: type=1334 audit(1696274813.583:92): prog-id=12 op=UNLOAD Oct 2 19:26:54.137447 kernel: audit: type=1334 audit(1696274813.586:93): prog-id=16 op=LOAD Oct 2 19:26:54.137478 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:26:54.137509 kernel: audit: type=1334 audit(1696274813.588:94): prog-id=17 op=LOAD Oct 2 19:26:54.137539 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:26:54.137569 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:26:54.137668 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:26:54.137728 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:26:54.137764 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Oct 2 19:26:54.137797 systemd[1]: Created slice system-getty.slice. Oct 2 19:26:54.137829 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:26:54.137861 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:26:54.137895 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:26:54.137927 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:26:54.137962 systemd[1]: Created slice user.slice. Oct 2 19:26:54.137992 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:26:54.138024 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:26:54.138055 systemd[1]: Set up automount boot.automount. Oct 2 19:26:54.138086 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:26:54.138125 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:26:54.138158 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:26:54.138188 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:26:54.138220 systemd[1]: Reached target integritysetup.target. Oct 2 19:26:54.138251 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:26:54.138289 systemd[1]: Reached target remote-fs.target. Oct 2 19:26:54.138319 systemd[1]: Reached target slices.target. Oct 2 19:26:54.138349 systemd[1]: Reached target swap.target. Oct 2 19:26:54.138378 systemd[1]: Reached target torcx.target. Oct 2 19:26:54.138408 systemd[1]: Reached target veritysetup.target. Oct 2 19:26:54.138440 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:26:54.138470 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:26:54.138499 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:26:54.138532 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:26:54.138567 systemd[1]: Listening on systemd-udevd-kernel.socket. 
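Slice names such as system-coreos\x2dmetadata\x2dsshkeys.slice above use systemd's unit-name escaping: a literal "-" inside a name component is written as \x2d so it is not confused with the slice hierarchy separator. A small sketch of that rule for a single component; treat it as an approximation of what systemd-escape does, not the canonical implementation (it ignores the separate handling of "/"):

    def escape_component(name: str) -> str:
        """Keep ASCII alphanumerics and ':_.', escape everything else (including '-') as \\xNN."""
        out = []
        for i, ch in enumerate(name):
            if ch.isascii() and (ch.isalnum() or ch in ":_.") and not (i == 0 and ch == "."):
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out)

    print(escape_component("coreos-metadata-sshkeys"))
    # -> coreos\x2dmetadata\x2dsshkeys, as in system-coreos\x2dmetadata\x2dsshkeys.slice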
Oct 2 19:26:54.138597 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:26:54.138627 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:26:54.138657 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:26:54.138688 systemd[1]: Mounting media.mount... Oct 2 19:26:54.142834 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:26:54.142872 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:26:54.142903 systemd[1]: Mounting tmp.mount... Oct 2 19:26:54.142934 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:26:54.142969 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:26:54.143008 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:26:54.143039 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:26:54.143072 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:26:54.143106 systemd[1]: Starting modprobe@drm.service... Oct 2 19:26:54.143140 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:26:54.143175 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:26:54.143207 systemd[1]: Starting modprobe@loop.service... Oct 2 19:26:54.143248 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:26:54.143285 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:26:54.143317 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:26:54.143421 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:26:54.143456 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:26:54.143486 systemd[1]: Stopped systemd-journald.service. Oct 2 19:26:54.143517 systemd[1]: Starting systemd-journald.service... Oct 2 19:26:54.143547 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:26:54.143581 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:26:54.143611 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:26:54.143648 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:26:54.143681 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:26:54.145850 systemd[1]: Stopped verity-setup.service. Oct 2 19:26:54.145896 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:26:54.145926 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:26:54.145956 systemd[1]: Mounted media.mount. Oct 2 19:26:54.145987 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:26:54.146016 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:26:54.146049 systemd[1]: Mounted tmp.mount. Oct 2 19:26:54.146081 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:26:54.146118 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:26:54.146151 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:26:54.146181 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:26:54.146213 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:26:54.161450 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:26:54.161501 systemd[1]: Finished modprobe@drm.service. Oct 2 19:26:54.161533 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:26:54.161566 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:26:54.161599 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:26:54.161631 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:26:54.163809 systemd[1]: Reached target network-pre.target. Oct 2 19:26:54.163862 systemd[1]: Mounted sys-kernel-config.mount. 
Oct 2 19:26:54.163893 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:26:54.163925 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:26:54.163965 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:26:54.163996 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:26:54.164029 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:26:54.164093 kernel: fuse: init (API version 7.34) Oct 2 19:26:54.164139 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:26:54.164186 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:26:54.164220 kernel: loop: module loaded Oct 2 19:26:54.164254 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:26:54.164286 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:26:54.164322 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:26:54.164353 systemd[1]: Finished modprobe@loop.service. Oct 2 19:26:54.164394 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:26:54.164427 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:26:54.164461 systemd-journald[1494]: Journal started Oct 2 19:26:54.164644 systemd-journald[1494]: Runtime Journal (/run/log/journal/ec22beba6d55acae28478f042dbd5399) is 8.0M, max 75.4M, 67.4M free. Oct 2 19:26:49.129000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:26:49.282000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:26:49.282000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:26:49.282000 audit: BPF prog-id=10 op=LOAD Oct 2 19:26:49.282000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:26:49.282000 audit: BPF prog-id=11 op=LOAD Oct 2 19:26:49.282000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:26:53.576000 audit: BPF prog-id=12 op=LOAD Oct 2 19:26:53.576000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:26:53.578000 audit: BPF prog-id=13 op=LOAD Oct 2 19:26:53.581000 audit: BPF prog-id=14 op=LOAD Oct 2 19:26:53.581000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:26:53.581000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:26:53.583000 audit: BPF prog-id=15 op=LOAD Oct 2 19:26:53.583000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:26:53.586000 audit: BPF prog-id=16 op=LOAD Oct 2 19:26:53.588000 audit: BPF prog-id=17 op=LOAD Oct 2 19:26:53.589000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:26:53.589000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:26:53.594000 audit: BPF prog-id=18 op=LOAD Oct 2 19:26:53.594000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:26:53.596000 audit: BPF prog-id=19 op=LOAD Oct 2 19:26:53.599000 audit: BPF prog-id=20 op=LOAD Oct 2 19:26:53.599000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:26:53.599000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:26:53.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:53.612000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:26:53.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:53.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:53.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:53.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:53.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:53.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:53.906000 audit: BPF prog-id=21 op=LOAD Oct 2 19:26:53.906000 audit: BPF prog-id=22 op=LOAD Oct 2 19:26:53.906000 audit: BPF prog-id=23 op=LOAD Oct 2 19:26:53.906000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:26:53.906000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:26:53.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:53.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:53.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:53.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:54.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:54.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:54.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:54.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:54.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:54.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:54.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:54.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:54.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:54.125000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:26:54.125000 audit[1494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffd3cfb0b0 a2=4000 a3=1 items=0 ppid=1 pid=1494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:54.178242 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:26:54.178315 systemd[1]: Started systemd-journald.service. Oct 2 19:26:54.125000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:26:54.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:54.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:54.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:54.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:54.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:54.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:49.486615 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2023-10-02T19:26:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:26:53.575717 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:26:49.488431 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2023-10-02T19:26:49Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:26:53.602276 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 2 19:26:49.488482 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2023-10-02T19:26:49Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:26:54.178361 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:26:49.488549 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2023-10-02T19:26:49Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:26:49.488575 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2023-10-02T19:26:49Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:26:54.181257 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:26:49.488640 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2023-10-02T19:26:49Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:26:49.488671 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2023-10-02T19:26:49Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:26:49.489092 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2023-10-02T19:26:49Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:26:49.489180 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2023-10-02T19:26:49Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:26:49.489216 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2023-10-02T19:26:49Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:26:54.189195 systemd[1]: Starting systemd-journal-flush.service... 
Oct 2 19:26:49.490269 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2023-10-02T19:26:49Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:26:49.490350 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2023-10-02T19:26:49Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:26:49.490396 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2023-10-02T19:26:49Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:26:49.490436 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2023-10-02T19:26:49Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:26:49.490482 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2023-10-02T19:26:49Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:26:49.490520 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2023-10-02T19:26:49Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:26:52.706965 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2023-10-02T19:26:52Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:26:52.707533 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2023-10-02T19:26:52Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:26:52.707830 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2023-10-02T19:26:52Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:26:52.708275 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2023-10-02T19:26:52Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:26:52.708390 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2023-10-02T19:26:52Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:26:52.708533 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2023-10-02T19:26:52Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:26:54.230167 systemd[1]: Finished systemd-sysctl.service. 
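The torcx generator run above ends by sealing its result into /run/metadata/torcx as KEY="VALUE" pairs (TORCX_LOWER_PROFILES, TORCX_PROFILE_PATH, TORCX_BINDIR, TORCX_UNPACKDIR, ...). A minimal sketch for reading that sealed state back, assuming the file stores one KEY="VALUE" pair per line in the usual EnvironmentFile style:

    from pathlib import Path

    def read_torcx_metadata(path: str = "/run/metadata/torcx") -> dict:
        """Parse KEY="VALUE" lines such as TORCX_PROFILE_PATH="/run/torcx/profile.json"."""
        meta = {}
        for raw in Path(path).read_text().splitlines():
            raw = raw.strip()
            if not raw or "=" not in raw:
                continue
            key, _, value = raw.partition("=")
            meta[key] = value.strip().strip('"')
        return meta

    # On the host that produced this log, meta["TORCX_BINDIR"] would be /run/torcx/bin
    # and meta["TORCX_LOWER_PROFILES"] would be vendor.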
Oct 2 19:26:54.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:54.239274 systemd-journald[1494]: Time spent on flushing to /var/log/journal/ec22beba6d55acae28478f042dbd5399 is 76.006ms for 1155 entries. Oct 2 19:26:54.239274 systemd-journald[1494]: System Journal (/var/log/journal/ec22beba6d55acae28478f042dbd5399) is 8.0M, max 195.6M, 187.6M free. Oct 2 19:26:54.352906 systemd-journald[1494]: Received client request to flush runtime journal. Oct 2 19:26:54.355584 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:26:54.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:54.392906 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:26:54.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:54.397188 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:26:54.417461 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:26:54.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:54.421779 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:26:54.438072 udevadm[1532]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 2 19:26:54.551064 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:26:54.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:54.555552 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:26:54.693161 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:26:54.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:55.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:55.153000 audit: BPF prog-id=24 op=LOAD Oct 2 19:26:55.153000 audit: BPF prog-id=25 op=LOAD Oct 2 19:26:55.153000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:26:55.153000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:26:55.151523 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:26:55.156551 systemd[1]: Starting systemd-udevd.service... Oct 2 19:26:55.205564 systemd-udevd[1537]: Using default interface naming scheme 'v252'. Oct 2 19:26:55.248995 systemd[1]: Started systemd-udevd.service. Oct 2 19:26:55.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 2 19:26:55.253000 audit: BPF prog-id=26 op=LOAD Oct 2 19:26:55.258455 systemd[1]: Starting systemd-networkd.service... Oct 2 19:26:55.266000 audit: BPF prog-id=27 op=LOAD Oct 2 19:26:55.267000 audit: BPF prog-id=28 op=LOAD Oct 2 19:26:55.267000 audit: BPF prog-id=29 op=LOAD Oct 2 19:26:55.270417 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:26:55.399242 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 19:26:55.408428 systemd[1]: Started systemd-userdbd.service. Oct 2 19:26:55.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:55.423851 (udev-worker)[1538]: Network interface NamePolicy= disabled on kernel command line. Oct 2 19:26:55.618074 systemd-networkd[1545]: lo: Link UP Oct 2 19:26:55.619825 systemd-networkd[1545]: lo: Gained carrier Oct 2 19:26:55.621058 systemd-networkd[1545]: Enumeration completed Oct 2 19:26:55.621480 systemd[1]: Started systemd-networkd.service. Oct 2 19:26:55.622514 systemd-networkd[1545]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:26:55.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:55.625800 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:26:55.632733 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 2 19:26:55.634062 systemd-networkd[1545]: eth0: Link UP Oct 2 19:26:55.634610 systemd-networkd[1545]: eth0: Gained carrier Oct 2 19:26:55.653100 systemd-networkd[1545]: eth0: DHCPv4 address 172.31.20.240/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 2 19:26:55.738822 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1552) Oct 2 19:26:55.969001 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:26:55.972625 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:26:55.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:55.977494 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:26:56.029848 lvm[1656]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:26:56.071845 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:26:56.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:56.074070 systemd[1]: Reached target cryptsetup.target. Oct 2 19:26:56.078746 systemd[1]: Starting lvm2-activation.service... Oct 2 19:26:56.093935 lvm[1657]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:26:56.133970 systemd[1]: Finished lvm2-activation.service. Oct 2 19:26:56.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:56.136059 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:26:56.138005 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:26:56.138077 systemd[1]: Reached target local-fs.target. Oct 2 19:26:56.139976 systemd[1]: Reached target machines.target. Oct 2 19:26:56.144645 systemd[1]: Starting ldconfig.service... Oct 2 19:26:56.147110 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:26:56.147437 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:26:56.151112 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:26:56.156994 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:26:56.162470 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:26:56.164565 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:26:56.164690 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:26:56.167277 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:26:56.199963 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1659 (bootctl) Oct 2 19:26:56.204021 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:26:56.231314 systemd-tmpfiles[1662]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:26:56.233962 systemd-tmpfiles[1662]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:26:56.241326 systemd-tmpfiles[1662]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:26:56.246543 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:26:56.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:56.325372 systemd-fsck[1667]: fsck.fat 4.2 (2021-01-31) Oct 2 19:26:56.325372 systemd-fsck[1667]: /dev/nvme0n1p1: 236 files, 113463/258078 clusters Oct 2 19:26:56.330840 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:26:56.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:56.336081 systemd[1]: Mounting boot.mount... Oct 2 19:26:56.378585 systemd[1]: Mounted boot.mount. Oct 2 19:26:56.415430 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:26:56.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:56.670089 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:26:56.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:56.675509 systemd[1]: Starting audit-rules.service... Oct 2 19:26:56.682663 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:26:56.687445 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:26:56.691000 audit: BPF prog-id=30 op=LOAD Oct 2 19:26:56.699000 audit: BPF prog-id=31 op=LOAD Oct 2 19:26:56.696047 systemd[1]: Starting systemd-resolved.service... Oct 2 19:26:56.708278 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:26:56.712403 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:26:56.755448 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:26:56.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:56.757638 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:26:56.775000 audit[1688]: SYSTEM_BOOT pid=1688 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:26:56.783511 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:26:56.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:56.928845 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:26:56.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:56.930995 systemd[1]: Reached target time-set.target. Oct 2 19:26:57.009239 systemd-resolved[1685]: Positive Trust Anchors: Oct 2 19:26:57.009270 systemd-resolved[1685]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:26:57.009324 systemd-resolved[1685]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:26:57.058041 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:26:57.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:57.096000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:26:57.096000 audit[1703]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd6d07870 a2=420 a3=0 items=0 ppid=1682 pid=1703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:57.096000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:26:57.101212 systemd[1]: Finished audit-rules.service. Oct 2 19:26:57.103500 augenrules[1703]: No rules Oct 2 19:26:57.103850 systemd-resolved[1685]: Defaulting to hostname 'linux'. Oct 2 19:26:57.109319 systemd[1]: Started systemd-resolved.service. Oct 2 19:26:57.111300 systemd[1]: Reached target network.target. Oct 2 19:26:57.113012 systemd[1]: Reached target nss-lookup.target. Oct 2 19:26:57.153233 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:26:57.154381 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:26:57.284891 systemd-networkd[1545]: eth0: Gained IPv6LL Oct 2 19:26:57.289328 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:26:57.291770 systemd[1]: Reached target network-online.target. Oct 2 19:26:57.296051 systemd-timesyncd[1687]: Contacted time server 108.61.56.35:123 (0.flatcar.pool.ntp.org). Oct 2 19:26:57.296282 systemd-timesyncd[1687]: Initial clock synchronization to Mon 2023-10-02 19:26:56.940904 UTC. Oct 2 19:26:57.453062 ldconfig[1658]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:26:57.458859 systemd[1]: Finished ldconfig.service. Oct 2 19:26:57.463251 systemd[1]: Starting systemd-update-done.service... Oct 2 19:26:57.485933 systemd[1]: Finished systemd-update-done.service. Oct 2 19:26:57.488060 systemd[1]: Reached target sysinit.target. Oct 2 19:26:57.491040 systemd[1]: Started motdgen.path. Oct 2 19:26:57.492637 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:26:57.495340 systemd[1]: Started logrotate.timer. Oct 2 19:26:57.497290 systemd[1]: Started mdadm.timer. Oct 2 19:26:57.498826 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:26:57.500598 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:26:57.500653 systemd[1]: Reached target paths.target. Oct 2 19:26:57.502214 systemd[1]: Reached target timers.target. Oct 2 19:26:57.504845 systemd[1]: Listening on dbus.socket. Oct 2 19:26:57.508472 systemd[1]: Starting docker.socket... Oct 2 19:26:57.520207 systemd[1]: Listening on sshd.socket. Oct 2 19:26:57.522271 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:26:57.523384 systemd[1]: Listening on docker.socket. Oct 2 19:26:57.525483 systemd[1]: Reached target sockets.target. Oct 2 19:26:57.527568 systemd[1]: Reached target basic.target. Oct 2 19:26:57.529423 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:26:57.529655 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:26:57.541722 systemd[1]: Started amazon-ssm-agent.service. 
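systemd-timesyncd above reaches 0.flatcar.pool.ntp.org on UDP port 123 and then steps the clock once ("Initial clock synchronization ..."). Purely as an illustration of the protocol involved, and not of what timesyncd itself executes, here is a minimal SNTP query against the same pool name; 2208988800 is the fixed offset between the NTP epoch (1900) and the Unix epoch (1970):

    import socket
    import struct
    import time

    NTP_UNIX_OFFSET = 2208988800  # seconds from 1900-01-01 to 1970-01-01

    def sntp_time(server: str = "0.flatcar.pool.ntp.org", timeout: float = 2.0) -> float:
        packet = bytearray(48)
        packet[0] = (4 << 3) | 3          # LI=0, version 4, mode 3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            sock.sendto(packet, (server, 123))
            data, _ = sock.recvfrom(48)
        seconds, fraction = struct.unpack("!II", data[40:48])   # transmit timestamp
        return seconds - NTP_UNIX_OFFSET + fraction / 2**32

    if __name__ == "__main__":
        print(time.strftime("%Y-%m-%d %H:%M:%S UTC", time.gmtime(sntp_time())))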
Oct 2 19:26:57.547265 systemd[1]: Starting containerd.service... Oct 2 19:26:57.556055 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Oct 2 19:26:57.560451 systemd[1]: Starting dbus.service... Oct 2 19:26:57.564186 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:26:57.569276 systemd[1]: Starting extend-filesystems.service... Oct 2 19:26:57.571044 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:26:57.573552 systemd[1]: Starting motdgen.service... Oct 2 19:26:57.583676 systemd[1]: Started nvidia.service. Oct 2 19:26:57.595080 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:26:57.600540 systemd[1]: Starting prepare-critools.service... Oct 2 19:26:57.606105 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:26:57.618164 systemd[1]: Starting sshd-keygen.service... Oct 2 19:26:57.625463 jq[1715]: false Oct 2 19:26:57.625762 systemd[1]: Starting systemd-logind.service... Oct 2 19:26:57.627921 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:26:57.628081 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:26:57.629007 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:26:57.633148 systemd[1]: Starting update-engine.service... Oct 2 19:26:57.637751 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:26:57.667939 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:26:57.668337 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:26:57.761317 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:26:57.761786 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:26:57.766402 jq[1725]: true Oct 2 19:26:57.868823 tar[1731]: ./ Oct 2 19:26:57.868823 tar[1731]: ./loopback Oct 2 19:26:57.889765 tar[1730]: crictl Oct 2 19:26:57.910019 jq[1738]: true Oct 2 19:26:57.929310 dbus-daemon[1714]: [system] SELinux support is enabled Oct 2 19:26:57.929607 systemd[1]: Started dbus.service. Oct 2 19:26:57.934963 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:26:57.935035 systemd[1]: Reached target system-config.target. Oct 2 19:26:57.936966 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:26:57.937011 systemd[1]: Reached target user-config.target. Oct 2 19:26:57.970242 dbus-daemon[1714]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1545 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 2 19:26:57.976063 systemd[1]: Starting systemd-hostnamed.service... Oct 2 19:26:57.999142 update_engine[1724]: I1002 19:26:57.993427 1724 main.cc:92] Flatcar Update Engine starting Oct 2 19:26:58.012603 systemd[1]: Started update-engine.service. 
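The dbus-daemon record above shows bus activation at work: systemd-networkd asks for org.freedesktop.hostname1, dbus-daemon maps that name to dbus-org.freedesktop.hostname1.service, and systemd starts systemd-hostnamed on demand. A hedged sketch of such a client using the third-party dbus-python binding (assumed to be installed; any D-Bus library would do the same job):

    import dbus  # third-party "dbus-python" package, an assumption of this sketch

    bus = dbus.SystemBus()
    hostnamed = bus.get_object("org.freedesktop.hostname1", "/org/freedesktop/hostname1")
    props = dbus.Interface(hostnamed, "org.freedesktop.DBus.Properties")

    # Simply requesting a property is enough to trigger the activation logged above.
    print(props.Get("org.freedesktop.hostname1", "Hostname"))       # e.g. ip-172-31-20-240
    print(props.Get("org.freedesktop.hostname1", "KernelRelease"))  # e.g. 5.15.132-flatcar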
Oct 2 19:26:58.015663 update_engine[1724]: I1002 19:26:58.015615 1724 update_check_scheduler.cc:74] Next update check in 11m7s Oct 2 19:26:58.017611 systemd[1]: Started locksmithd.service. Oct 2 19:26:58.034746 extend-filesystems[1716]: Found nvme0n1 Oct 2 19:26:58.034746 extend-filesystems[1716]: Found nvme0n1p1 Oct 2 19:26:58.034746 extend-filesystems[1716]: Found nvme0n1p2 Oct 2 19:26:58.034746 extend-filesystems[1716]: Found nvme0n1p3 Oct 2 19:26:58.034746 extend-filesystems[1716]: Found usr Oct 2 19:26:58.034746 extend-filesystems[1716]: Found nvme0n1p4 Oct 2 19:26:58.034746 extend-filesystems[1716]: Found nvme0n1p6 Oct 2 19:26:58.034746 extend-filesystems[1716]: Found nvme0n1p7 Oct 2 19:26:58.034746 extend-filesystems[1716]: Found nvme0n1p9 Oct 2 19:26:58.034746 extend-filesystems[1716]: Checking size of /dev/nvme0n1p9 Oct 2 19:26:58.070386 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:26:58.070835 systemd[1]: Finished motdgen.service. Oct 2 19:26:58.086551 amazon-ssm-agent[1711]: 2023/10/02 19:26:58 Failed to load instance info from vault. RegistrationKey does not exist. Oct 2 19:26:58.105452 amazon-ssm-agent[1711]: Initializing new seelog logger Oct 2 19:26:58.113984 amazon-ssm-agent[1711]: New Seelog Logger Creation Complete Oct 2 19:26:58.114353 amazon-ssm-agent[1711]: 2023/10/02 19:26:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 2 19:26:58.114582 amazon-ssm-agent[1711]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 2 19:26:58.115230 amazon-ssm-agent[1711]: 2023/10/02 19:26:58 processing appconfig overrides Oct 2 19:26:58.166208 extend-filesystems[1716]: Resized partition /dev/nvme0n1p9 Oct 2 19:26:58.200135 extend-filesystems[1781]: resize2fs 1.46.5 (30-Dec-2021) Oct 2 19:26:58.229162 tar[1731]: ./bandwidth Oct 2 19:26:58.232728 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Oct 2 19:26:58.295735 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Oct 2 19:26:58.308939 systemd[1]: nvidia.service: Deactivated successfully. Oct 2 19:26:58.331407 systemd-logind[1723]: Watching system buttons on /dev/input/event0 (Power Button) Oct 2 19:26:58.332404 systemd-logind[1723]: New seat seat0. Oct 2 19:26:58.334331 extend-filesystems[1781]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Oct 2 19:26:58.334331 extend-filesystems[1781]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 2 19:26:58.334331 extend-filesystems[1781]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Oct 2 19:26:58.372996 extend-filesystems[1716]: Resized filesystem in /dev/nvme0n1p9 Oct 2 19:26:58.376970 bash[1796]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:26:58.335581 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:26:58.379093 env[1740]: time="2023-10-02T19:26:58.376505147Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:26:58.336683 systemd[1]: Finished extend-filesystems.service. Oct 2 19:26:58.353609 systemd[1]: Started systemd-logind.service. Oct 2 19:26:58.374072 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:26:58.492684 tar[1731]: ./ptp Oct 2 19:26:58.617596 env[1740]: time="2023-10-02T19:26:58.617514732Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Oct 2 19:26:58.619202 env[1740]: time="2023-10-02T19:26:58.619135063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:26:58.621954 dbus-daemon[1714]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 2 19:26:58.622193 systemd[1]: Started systemd-hostnamed.service. Oct 2 19:26:58.626548 dbus-daemon[1714]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1756 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 2 19:26:58.631106 systemd[1]: Starting polkit.service... Oct 2 19:26:58.651721 env[1740]: time="2023-10-02T19:26:58.651628620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:26:58.651860 env[1740]: time="2023-10-02T19:26:58.651728721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:26:58.652155 env[1740]: time="2023-10-02T19:26:58.652104719Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:26:58.652241 env[1740]: time="2023-10-02T19:26:58.652150555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 19:26:58.652241 env[1740]: time="2023-10-02T19:26:58.652182698Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:26:58.652241 env[1740]: time="2023-10-02T19:26:58.652206906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:26:58.652404 env[1740]: time="2023-10-02T19:26:58.652365996Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:26:58.653158 env[1740]: time="2023-10-02T19:26:58.653105882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:26:58.656205 env[1740]: time="2023-10-02T19:26:58.656137640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:26:58.656329 env[1740]: time="2023-10-02T19:26:58.656216583Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:26:58.656444 env[1740]: time="2023-10-02T19:26:58.656402082Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:26:58.656535 env[1740]: time="2023-10-02T19:26:58.656460842Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:26:58.664101 env[1740]: time="2023-10-02T19:26:58.664013534Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Oct 2 19:26:58.664251 env[1740]: time="2023-10-02T19:26:58.664107281Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:26:58.664251 env[1740]: time="2023-10-02T19:26:58.664164217Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:26:58.664380 env[1740]: time="2023-10-02T19:26:58.664255201Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:26:58.664380 env[1740]: time="2023-10-02T19:26:58.664289672Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:26:58.664380 env[1740]: time="2023-10-02T19:26:58.664347744Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:26:58.664533 env[1740]: time="2023-10-02T19:26:58.664378557Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:26:58.665162 env[1740]: time="2023-10-02T19:26:58.665078078Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:26:58.665162 env[1740]: time="2023-10-02T19:26:58.665150622Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:26:58.665297 env[1740]: time="2023-10-02T19:26:58.665190139Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:26:58.665297 env[1740]: time="2023-10-02T19:26:58.665220700Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:26:58.665297 env[1740]: time="2023-10-02T19:26:58.665249346Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:26:58.665545 env[1740]: time="2023-10-02T19:26:58.665481644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:26:58.665742 env[1740]: time="2023-10-02T19:26:58.665682407Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:26:58.666265 env[1740]: time="2023-10-02T19:26:58.666223905Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:26:58.666343 env[1740]: time="2023-10-02T19:26:58.666281701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:26:58.666343 env[1740]: time="2023-10-02T19:26:58.666317411Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:26:58.666525 env[1740]: time="2023-10-02T19:26:58.666486856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:26:58.666635 env[1740]: time="2023-10-02T19:26:58.666529068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:26:58.666635 env[1740]: time="2023-10-02T19:26:58.666585867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:26:58.666635 env[1740]: time="2023-10-02T19:26:58.666616990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Oct 2 19:26:58.666802 env[1740]: time="2023-10-02T19:26:58.666648216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:26:58.666802 env[1740]: time="2023-10-02T19:26:58.666680967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:26:58.666802 env[1740]: time="2023-10-02T19:26:58.666727319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:26:58.666802 env[1740]: time="2023-10-02T19:26:58.666756171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:26:58.666802 env[1740]: time="2023-10-02T19:26:58.666789221Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:26:58.667070 env[1740]: time="2023-10-02T19:26:58.667044741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:26:58.667132 env[1740]: time="2023-10-02T19:26:58.667080634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:26:58.667186 env[1740]: time="2023-10-02T19:26:58.667128970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:26:58.667186 env[1740]: time="2023-10-02T19:26:58.667158671Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:26:58.667292 env[1740]: time="2023-10-02T19:26:58.667189828Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:26:58.667292 env[1740]: time="2023-10-02T19:26:58.667216674Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:26:58.667292 env[1740]: time="2023-10-02T19:26:58.667252143Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:26:58.667433 env[1740]: time="2023-10-02T19:26:58.667312852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 2 19:26:58.670900 env[1740]: time="2023-10-02T19:26:58.667674069Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:26:58.672056 env[1740]: time="2023-10-02T19:26:58.670915809Z" level=info msg="Connect containerd service" Oct 2 19:26:58.672056 env[1740]: time="2023-10-02T19:26:58.670997527Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:26:58.686173 env[1740]: time="2023-10-02T19:26:58.686097200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:26:58.687793 env[1740]: time="2023-10-02T19:26:58.687737645Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:26:58.688035 env[1740]: time="2023-10-02T19:26:58.687991044Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:26:58.688203 systemd[1]: Started containerd.service. 
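The "no network config found in /etc/cni/net.d" error above is expected this early in boot: the CRI plugin configuration names /etc/cni/net.d and /opt/cni/bin, while prepare-cni-plugins.service only finishes unpacking the plugin binaries a couple of seconds later in this log. A small sketch that inspects the same two paths the configuration points at (treating *.conf/*.conflist as the config extensions is an assumption of the sketch):

    from pathlib import Path

    CNI_CONF_DIR = Path("/etc/cni/net.d")  # NetworkPluginConfDir in the dump above
    CNI_BIN_DIR = Path("/opt/cni/bin")     # NetworkPluginBinDir in the dump above

    def cni_status() -> dict:
        confs = sorted(p.name for p in CNI_CONF_DIR.glob("*.conf*"))
        plugins = sorted(p.name for p in CNI_BIN_DIR.iterdir()) if CNI_BIN_DIR.is_dir() else []
        return {"configs": confs, "plugins": plugins}

    status = cni_status()
    if not status["configs"]:
        print(f"no network config found in {CNI_CONF_DIR} - the condition containerd reports above")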
Oct 2 19:26:58.691375 env[1740]: time="2023-10-02T19:26:58.688628548Z" level=info msg="containerd successfully booted in 0.380033s" Oct 2 19:26:58.700523 polkitd[1806]: Started polkitd version 121 Oct 2 19:26:58.701247 env[1740]: time="2023-10-02T19:26:58.701157391Z" level=info msg="Start subscribing containerd event" Oct 2 19:26:58.701332 env[1740]: time="2023-10-02T19:26:58.701279646Z" level=info msg="Start recovering state" Oct 2 19:26:58.701409 env[1740]: time="2023-10-02T19:26:58.701390136Z" level=info msg="Start event monitor" Oct 2 19:26:58.701465 env[1740]: time="2023-10-02T19:26:58.701414218Z" level=info msg="Start snapshots syncer" Oct 2 19:26:58.701465 env[1740]: time="2023-10-02T19:26:58.701436614Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:26:58.701465 env[1740]: time="2023-10-02T19:26:58.701455008Z" level=info msg="Start streaming server" Oct 2 19:26:58.738609 polkitd[1806]: Loading rules from directory /etc/polkit-1/rules.d Oct 2 19:26:58.738932 polkitd[1806]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 2 19:26:58.751944 polkitd[1806]: Finished loading, compiling and executing 2 rules Oct 2 19:26:58.753179 dbus-daemon[1714]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 2 19:26:58.753413 systemd[1]: Started polkit.service. Oct 2 19:26:58.758792 polkitd[1806]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 2 19:26:58.775615 tar[1731]: ./vlan Oct 2 19:26:58.791715 systemd-hostnamed[1756]: Hostname set to (transient) Oct 2 19:26:58.791875 systemd-resolved[1685]: System hostname changed to 'ip-172-31-20-240'. Oct 2 19:26:58.820557 coreos-metadata[1713]: Oct 02 19:26:58.820 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 2 19:26:58.822599 coreos-metadata[1713]: Oct 02 19:26:58.822 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Oct 2 19:26:58.823803 coreos-metadata[1713]: Oct 02 19:26:58.823 INFO Fetch successful Oct 2 19:26:58.823803 coreos-metadata[1713]: Oct 02 19:26:58.823 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 2 19:26:58.826176 coreos-metadata[1713]: Oct 02 19:26:58.826 INFO Fetch successful Oct 2 19:26:58.834907 unknown[1713]: wrote ssh authorized keys file for user: core Oct 2 19:26:58.867470 update-ssh-keys[1817]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:26:58.868571 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Oct 2 19:26:58.944426 tar[1731]: ./host-device Oct 2 19:26:59.058005 amazon-ssm-agent[1711]: 2023-10-02 19:26:59 INFO Entering SSM Agent hibernate - AccessDeniedException: User: arn:aws:sts::075585003325:assumed-role/jenkins-test/i-0f7d01baa5c7af7cb is not authorized to perform: ssm:UpdateInstanceInformation on resource: arn:aws:ec2:us-west-2:075585003325:instance/i-0f7d01baa5c7af7cb because no identity-based policy allows the ssm:UpdateInstanceInformation action Oct 2 19:26:59.058005 amazon-ssm-agent[1711]: status code: 400, request id: 786b3362-fccf-43f6-911a-04e3fd4c70ce Oct 2 19:26:59.058005 amazon-ssm-agent[1711]: 2023-10-02 19:26:59 INFO Agent is in hibernate mode. Reducing logging. 
Logging will be reduced to one log per backoff period Oct 2 19:26:59.075214 tar[1731]: ./tuning Oct 2 19:26:59.181678 tar[1731]: ./vrf Oct 2 19:26:59.290183 tar[1731]: ./sbr Oct 2 19:26:59.387086 tar[1731]: ./tap Oct 2 19:26:59.495798 tar[1731]: ./dhcp Oct 2 19:26:59.878032 tar[1731]: ./static Oct 2 19:26:59.946259 tar[1731]: ./firewall Oct 2 19:26:59.955543 systemd[1]: Finished prepare-critools.service. Oct 2 19:27:00.020008 tar[1731]: ./macvlan Oct 2 19:27:00.082817 tar[1731]: ./dummy Oct 2 19:27:00.144914 tar[1731]: ./bridge Oct 2 19:27:00.212454 tar[1731]: ./ipvlan Oct 2 19:27:00.273800 tar[1731]: ./portmap Oct 2 19:27:00.332351 tar[1731]: ./host-local Oct 2 19:27:00.414544 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:27:00.499705 locksmithd[1761]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:27:02.119279 sshd_keygen[1747]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:27:02.176187 systemd[1]: Finished sshd-keygen.service. Oct 2 19:27:02.180616 systemd[1]: Starting issuegen.service... Oct 2 19:27:02.199406 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:27:02.199810 systemd[1]: Finished issuegen.service. Oct 2 19:27:02.205123 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:27:02.226555 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:27:02.231271 systemd[1]: Started getty@tty1.service. Oct 2 19:27:02.235654 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 19:27:02.237833 systemd[1]: Reached target getty.target. Oct 2 19:27:02.239784 systemd[1]: Reached target multi-user.target. Oct 2 19:27:02.244108 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:27:02.267272 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:27:02.267649 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:27:02.269926 systemd[1]: Startup finished in 1.238s (kernel) + 12.408s (initrd) + 13.256s (userspace) = 26.904s. Oct 2 19:27:06.489602 systemd[1]: Created slice system-sshd.slice. Oct 2 19:27:06.492751 systemd[1]: Started sshd@0-172.31.20.240:22-139.178.89.65:60444.service. Oct 2 19:27:06.683735 sshd[1922]: Accepted publickey for core from 139.178.89.65 port 60444 ssh2: RSA SHA256:UWiPcUSyDphe9v2WN1dtuuOFHMYWuZ3ahwMZ2IbYxYo Oct 2 19:27:06.688886 sshd[1922]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:27:06.707537 systemd[1]: Created slice user-500.slice. Oct 2 19:27:06.712168 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:27:06.718749 systemd-logind[1723]: New session 1 of user core. Oct 2 19:27:06.735411 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:27:06.738673 systemd[1]: Starting user@500.service... Oct 2 19:27:06.751308 (systemd)[1925]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:27:06.956658 systemd[1925]: Queued start job for default target default.target. Oct 2 19:27:06.959024 systemd[1925]: Reached target paths.target. Oct 2 19:27:06.959264 systemd[1925]: Reached target sockets.target. Oct 2 19:27:06.959404 systemd[1925]: Reached target timers.target. Oct 2 19:27:06.959555 systemd[1925]: Reached target basic.target. Oct 2 19:27:06.959814 systemd[1925]: Reached target default.target. Oct 2 19:27:06.959912 systemd[1]: Started user@500.service. Oct 2 19:27:06.961537 systemd[1925]: Startup finished in 192ms. Oct 2 19:27:06.961926 systemd[1]: Started session-1.scope. 
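The coreos-metadata-sshkeys@core run earlier in this log pulls the key material from the EC2 instance-metadata service: a PUT to /latest/api/token followed by GETs of the two /2019-10-01/meta-data/public-keys paths it prints. A rough standard-library equivalent of those requests (the IMDSv2 token headers are standard EC2 metadata-service usage; everything else mirrors the URLs logged above):

    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_get(path: str, token: str) -> str:
        req = urllib.request.Request(IMDS + path,
                                     headers={"X-aws-ec2-metadata-token": token})
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    # IMDSv2 session token, matching the "Putting .../latest/api/token" line above
    token_req = urllib.request.Request(
        IMDS + "/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"})
    with urllib.request.urlopen(token_req, timeout=2) as resp:
        token = resp.read().decode()

    print(imds_get("/2019-10-01/meta-data/public-keys", token))
    print(imds_get("/2019-10-01/meta-data/public-keys/0/openssh-key", token))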
Oct 2 19:27:07.112894 systemd[1]: Started sshd@1-172.31.20.240:22-139.178.89.65:60452.service. Oct 2 19:27:07.297781 sshd[1934]: Accepted publickey for core from 139.178.89.65 port 60452 ssh2: RSA SHA256:UWiPcUSyDphe9v2WN1dtuuOFHMYWuZ3ahwMZ2IbYxYo Oct 2 19:27:07.301538 sshd[1934]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:27:07.308895 systemd-logind[1723]: New session 2 of user core. Oct 2 19:27:07.310822 systemd[1]: Started session-2.scope. Oct 2 19:27:07.458746 sshd[1934]: pam_unix(sshd:session): session closed for user core Oct 2 19:27:07.464751 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:27:07.465832 systemd[1]: sshd@1-172.31.20.240:22-139.178.89.65:60452.service: Deactivated successfully. Oct 2 19:27:07.467399 systemd-logind[1723]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:27:07.469040 systemd-logind[1723]: Removed session 2. Oct 2 19:27:07.488798 systemd[1]: Started sshd@2-172.31.20.240:22-139.178.89.65:60454.service. Oct 2 19:27:07.668471 sshd[1940]: Accepted publickey for core from 139.178.89.65 port 60454 ssh2: RSA SHA256:UWiPcUSyDphe9v2WN1dtuuOFHMYWuZ3ahwMZ2IbYxYo Oct 2 19:27:07.672546 sshd[1940]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:27:07.681237 systemd-logind[1723]: New session 3 of user core. Oct 2 19:27:07.682331 systemd[1]: Started session-3.scope. Oct 2 19:27:07.816998 sshd[1940]: pam_unix(sshd:session): session closed for user core Oct 2 19:27:07.823174 systemd[1]: sshd@2-172.31.20.240:22-139.178.89.65:60454.service: Deactivated successfully. Oct 2 19:27:07.824413 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 19:27:07.825648 systemd-logind[1723]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:27:07.827565 systemd-logind[1723]: Removed session 3. Oct 2 19:27:07.847792 systemd[1]: Started sshd@3-172.31.20.240:22-139.178.89.65:60468.service. Oct 2 19:27:08.022002 sshd[1946]: Accepted publickey for core from 139.178.89.65 port 60468 ssh2: RSA SHA256:UWiPcUSyDphe9v2WN1dtuuOFHMYWuZ3ahwMZ2IbYxYo Oct 2 19:27:08.025721 sshd[1946]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:27:08.034913 systemd[1]: Started session-4.scope. Oct 2 19:27:08.035844 systemd-logind[1723]: New session 4 of user core. Oct 2 19:27:08.178482 sshd[1946]: pam_unix(sshd:session): session closed for user core Oct 2 19:27:08.184937 systemd-logind[1723]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:27:08.185525 systemd[1]: sshd@3-172.31.20.240:22-139.178.89.65:60468.service: Deactivated successfully. Oct 2 19:27:08.186825 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:27:08.188371 systemd-logind[1723]: Removed session 4. Oct 2 19:27:08.208202 systemd[1]: Started sshd@4-172.31.20.240:22-139.178.89.65:60476.service. Oct 2 19:27:08.385313 sshd[1952]: Accepted publickey for core from 139.178.89.65 port 60476 ssh2: RSA SHA256:UWiPcUSyDphe9v2WN1dtuuOFHMYWuZ3ahwMZ2IbYxYo Oct 2 19:27:08.388987 sshd[1952]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:27:08.398300 systemd-logind[1723]: New session 5 of user core. Oct 2 19:27:08.399297 systemd[1]: Started session-5.scope. 
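Each of the logins above leaves the same pair of traces: an "Accepted publickey for core from ADDR port PORT ssh2: RSA SHA256:..." line from sshd and a pam_unix session-open line. For pulling the user, source address, port and key fingerprint out of lines in exactly this shape (the pattern is written against the format shown here, not as a general sshd parser):

    import re

    ACCEPT_RE = re.compile(
        r"Accepted (?P<method>\S+) for (?P<user>\S+) from (?P<addr>\S+) "
        r"port (?P<port>\d+) ssh2: (?P<keytype>\S+) (?P<fingerprint>\S+)")

    line = ("Accepted publickey for core from 139.178.89.65 port 60452 ssh2: "
            "RSA SHA256:UWiPcUSyDphe9v2WN1dtuuOFHMYWuZ3ahwMZ2IbYxYo")
    match = ACCEPT_RE.search(line)
    if match:
        print(match.group("user"), match.group("addr"), match.group("port"))
        # -> core 139.178.89.65 60452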
Oct 2 19:27:08.529885 sudo[1955]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:27:08.531012 sudo[1955]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:27:08.547013 dbus-daemon[1714]: avc: received setenforce notice (enforcing=1) Oct 2 19:27:08.550571 sudo[1955]: pam_unix(sudo:session): session closed for user root Oct 2 19:27:08.574827 sshd[1952]: pam_unix(sshd:session): session closed for user core Oct 2 19:27:08.582321 systemd[1]: sshd@4-172.31.20.240:22-139.178.89.65:60476.service: Deactivated successfully. Oct 2 19:27:08.584298 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:27:08.585986 systemd-logind[1723]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:27:08.588446 systemd-logind[1723]: Removed session 5. Oct 2 19:27:08.606116 systemd[1]: Started sshd@5-172.31.20.240:22-139.178.89.65:60482.service. Oct 2 19:27:08.790677 sshd[1959]: Accepted publickey for core from 139.178.89.65 port 60482 ssh2: RSA SHA256:UWiPcUSyDphe9v2WN1dtuuOFHMYWuZ3ahwMZ2IbYxYo Oct 2 19:27:08.794783 sshd[1959]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:27:08.803388 systemd-logind[1723]: New session 6 of user core. Oct 2 19:27:08.804397 systemd[1]: Started session-6.scope. Oct 2 19:27:08.925953 sudo[1963]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:27:08.926965 sudo[1963]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:27:08.934711 sudo[1963]: pam_unix(sudo:session): session closed for user root Oct 2 19:27:08.949108 sudo[1962]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:27:08.949639 sudo[1962]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:27:08.974072 systemd[1]: Stopping audit-rules.service... Oct 2 19:27:08.978000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:27:08.981354 kernel: kauditd_printk_skb: 78 callbacks suppressed Oct 2 19:27:08.981459 kernel: audit: type=1305 audit(1696274828.978:169): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:27:08.978000 audit[1966]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff324d1a0 a2=420 a3=0 items=0 ppid=1 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:08.987887 auditctl[1966]: No rules Oct 2 19:27:08.997391 kernel: audit: type=1300 audit(1696274828.978:169): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff324d1a0 a2=420 a3=0 items=0 ppid=1 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:08.998015 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:27:08.978000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:27:09.001678 kernel: audit: type=1327 audit(1696274828.978:169): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:27:08.998367 systemd[1]: Stopped audit-rules.service. 
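The PROCTITLE values in the audit records are the audited command lines, hex-encoded with NUL bytes separating the argv elements. Decoding the two proctitle strings that appear in this log shows they are plain auditctl invocations:

    def decode_proctitle(hex_argv: str) -> str:
        """Turn an audit PROCTITLE hex string back into a space-joined command line."""
        return " ".join(part.decode() for part in bytes.fromhex(hex_argv).split(b"\x00"))

    print(decode_proctitle("2F7362696E2F617564697463746C002D44"))
    # -> /sbin/auditctl -D            (the rule flush just above)
    print(decode_proctitle(
        "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"))
    # -> /sbin/auditctl -R /etc/audit/audit.rules   (the earlier audit-rules start)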
Oct 2 19:27:08.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:09.010272 kernel: audit: type=1131 audit(1696274828.997:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:09.009096 systemd[1]: Starting audit-rules.service... Oct 2 19:27:09.071213 augenrules[1983]: No rules Oct 2 19:27:09.072450 systemd[1]: Finished audit-rules.service. Oct 2 19:27:09.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:09.082613 sudo[1962]: pam_unix(sudo:session): session closed for user root Oct 2 19:27:09.082000 audit[1962]: USER_END pid=1962 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:09.092185 kernel: audit: type=1130 audit(1696274829.072:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:09.092307 kernel: audit: type=1106 audit(1696274829.082:172): pid=1962 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:09.082000 audit[1962]: CRED_DISP pid=1962 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:09.100873 kernel: audit: type=1104 audit(1696274829.082:173): pid=1962 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:09.116074 sshd[1959]: pam_unix(sshd:session): session closed for user core Oct 2 19:27:09.117000 audit[1959]: USER_END pid=1959 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:09.117000 audit[1959]: CRED_DISP pid=1959 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:09.130346 systemd[1]: sshd@5-172.31.20.240:22-139.178.89.65:60482.service: Deactivated successfully. Oct 2 19:27:09.131639 systemd[1]: session-6.scope: Deactivated successfully. 
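Once kauditd starts echoing records to the console ("kauditd_printk_skb: 78 callbacks suppressed"), each kernel line carries a numeric record type and an audit(SECONDS.MILLIS:SERIAL) stamp. The seconds field is ordinary Unix time, so it can be checked against the wall-clock prefixes of the surrounding journal lines:

    from datetime import datetime, timezone

    def audit_stamp(raw: str):
        """Split 'audit(1696274828.997:170)' into a UTC datetime and the record serial."""
        inner = raw[raw.index("(") + 1 : raw.rindex(")")]
        ts, _, serial = inner.partition(":")
        return datetime.fromtimestamp(float(ts), tz=timezone.utc), int(serial)

    when, serial = audit_stamp("audit(1696274828.997:170)")
    print(when.isoformat(), serial)
    # -> 2023-10-02T19:27:08.997000+00:00 170, matching the 19:27:08.997000 SERVICE_STOP above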
Oct 2 19:27:09.139494 kernel: audit: type=1106 audit(1696274829.117:174): pid=1959 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:09.139617 kernel: audit: type=1104 audit(1696274829.117:175): pid=1959 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:09.140644 systemd-logind[1723]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:27:09.141749 kernel: audit: type=1131 audit(1696274829.130:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.20.240:22-139.178.89.65:60482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:09.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.20.240:22-139.178.89.65:60482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:09.151603 systemd[1]: Started sshd@6-172.31.20.240:22-139.178.89.65:60494.service. Oct 2 19:27:09.154377 systemd-logind[1723]: Removed session 6. Oct 2 19:27:09.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.20.240:22-139.178.89.65:60494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:09.331000 audit[1989]: USER_ACCT pid=1989 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:09.333950 sshd[1989]: Accepted publickey for core from 139.178.89.65 port 60494 ssh2: RSA SHA256:UWiPcUSyDphe9v2WN1dtuuOFHMYWuZ3ahwMZ2IbYxYo Oct 2 19:27:09.334000 audit[1989]: CRED_ACQ pid=1989 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:09.334000 audit[1989]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc8f424a0 a2=3 a3=1 items=0 ppid=1 pid=1989 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:09.334000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:27:09.337163 sshd[1989]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:27:09.346789 systemd[1]: Started session-7.scope. Oct 2 19:27:09.347572 systemd-logind[1723]: New session 7 of user core. 
Oct 2 19:27:09.356000 audit[1989]: USER_START pid=1989 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:09.363000 audit[1991]: CRED_ACQ pid=1991 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:09.467000 audit[1992]: USER_ACCT pid=1992 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:09.468080 sudo[1992]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:27:09.467000 audit[1992]: CRED_REFR pid=1992 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:09.468671 sudo[1992]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:27:09.471000 audit[1992]: USER_START pid=1992 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:10.152304 systemd[1]: Reloading. Oct 2 19:27:10.356141 /usr/lib/systemd/system-generators/torcx-generator[2025]: time="2023-10-02T19:27:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:27:10.366852 /usr/lib/systemd/system-generators/torcx-generator[2025]: time="2023-10-02T19:27:10Z" level=info msg="torcx already run" Oct 2 19:27:10.577955 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:27:10.578189 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:27:10.620534 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 2 19:27:10.768000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.768000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.768000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.768000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.768000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.768000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.768000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.768000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.768000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.769000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.769000 audit: BPF prog-id=40 op=LOAD Oct 2 19:27:10.769000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:27:10.770000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.770000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.770000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.771000 audit: BPF prog-id=41 op=LOAD Oct 2 19:27:10.772000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:27:10.772000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.772000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.773000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.773000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.773000 audit: BPF prog-id=42 op=LOAD Oct 2 19:27:10.773000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.773000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.773000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.773000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:27:10.773000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.773000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.773000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.774000 audit: BPF prog-id=43 op=LOAD Oct 2 19:27:10.774000 audit: BPF prog-id=36 op=UNLOAD Oct 2 19:27:10.774000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:27:10.775000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.775000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.775000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.775000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.775000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.775000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.775000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.775000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.776000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.776000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.776000 audit: BPF prog-id=44 op=LOAD Oct 2 19:27:10.776000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:27:10.779000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.779000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.779000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.779000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.779000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.779000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.779000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.779000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.779000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.780000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.780000 audit: BPF prog-id=45 op=LOAD Oct 2 19:27:10.780000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:27:10.781000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.781000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.781000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.781000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.781000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.781000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.781000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.781000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.781000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.782000 audit: BPF prog-id=46 op=LOAD Oct 2 19:27:10.782000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.782000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.782000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.782000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.783000 audit: BPF prog-id=47 op=LOAD Oct 2 19:27:10.783000 audit: BPF prog-id=33 op=UNLOAD Oct 2 19:27:10.783000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:27:10.785000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.785000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.785000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.786000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.786000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.786000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.786000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.786000 audit: BPF prog-id=48 op=LOAD Oct 2 19:27:10.786000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:27:10.789000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.789000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.789000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.789000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.789000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.790000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.790000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.790000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.790000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.790000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.790000 audit: BPF prog-id=49 op=LOAD Oct 2 19:27:10.790000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:27:10.791000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.791000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.791000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.791000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.791000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.791000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.791000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.791000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.792000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.792000 audit: BPF prog-id=50 op=LOAD Oct 2 19:27:10.792000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.792000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.792000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.792000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.792000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.792000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.792000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.793000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.793000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.793000 audit: BPF prog-id=51 op=LOAD Oct 2 
19:27:10.793000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:27:10.793000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:27:10.797000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.797000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.797000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.797000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.797000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.797000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.798000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.798000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.798000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.798000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.798000 audit: BPF prog-id=52 op=LOAD Oct 2 19:27:10.798000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:27:10.799000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.800000 audit[1]: 
AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.801000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.801000 audit: BPF prog-id=53 op=LOAD Oct 2 19:27:10.801000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:27:10.801000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.801000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.801000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.801000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.802000 audit: BPF prog-id=54 op=LOAD Oct 2 19:27:10.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.803000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.803000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.803000 audit: BPF prog-id=55 op=LOAD Oct 2 19:27:10.803000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:27:10.803000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:27:10.805000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.805000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.806000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.806000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.806000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.806000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.806000 audit: BPF prog-id=56 op=LOAD Oct 2 19:27:10.806000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.806000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.806000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.806000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.807000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.807000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.807000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.807000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.807000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:10.807000 audit: BPF prog-id=57 op=LOAD Oct 2 19:27:10.807000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:27:10.807000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:27:10.830583 systemd[1]: Started kubelet.service. Oct 2 19:27:10.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:10.871542 systemd[1]: Starting coreos-metadata.service... Oct 2 19:27:11.013025 kubelet[2076]: E1002 19:27:11.012920 2076 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 2 19:27:11.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 19:27:11.017514 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 19:27:11.017877 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 2 19:27:11.065363 coreos-metadata[2084]: Oct 02 19:27:11.065 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 2 19:27:11.066989 coreos-metadata[2084]: Oct 02 19:27:11.066 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1 Oct 2 19:27:11.068035 coreos-metadata[2084]: Oct 02 19:27:11.067 INFO Fetch successful Oct 2 19:27:11.068035 coreos-metadata[2084]: Oct 02 19:27:11.068 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1 Oct 2 19:27:11.069035 coreos-metadata[2084]: Oct 02 19:27:11.068 INFO Fetch successful Oct 2 19:27:11.069035 coreos-metadata[2084]: Oct 02 19:27:11.069 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1 Oct 2 19:27:11.070024 coreos-metadata[2084]: Oct 02 19:27:11.069 INFO Fetch successful Oct 2 19:27:11.070024 coreos-metadata[2084]: Oct 02 19:27:11.070 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1 Oct 2 19:27:11.071019 coreos-metadata[2084]: Oct 02 19:27:11.070 INFO Fetch successful Oct 2 19:27:11.071019 coreos-metadata[2084]: Oct 02 19:27:11.071 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1 Oct 2 19:27:11.071922 coreos-metadata[2084]: Oct 02 19:27:11.071 INFO Fetch successful Oct 2 19:27:11.071922 coreos-metadata[2084]: Oct 02 19:27:11.071 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1 Oct 2 19:27:11.074922 coreos-metadata[2084]: Oct 02 19:27:11.074 INFO Fetch successful Oct 2 19:27:11.074922 coreos-metadata[2084]: Oct 02 19:27:11.074 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1 Oct 2 19:27:11.076063 coreos-metadata[2084]: Oct 02 19:27:11.076 INFO Fetch successful Oct 2 19:27:11.076063 coreos-metadata[2084]: Oct 02 19:27:11.076 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1 Oct 2 19:27:11.077250 coreos-metadata[2084]: Oct 02 19:27:11.077 INFO Fetch successful Oct 2 19:27:11.101445 systemd[1]: Finished coreos-metadata.service. Oct 2 19:27:11.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:11.616718 systemd[1]: Stopped kubelet.service. Oct 2 19:27:11.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:11.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:11.660964 systemd[1]: Reloading. Oct 2 19:27:11.820655 /usr/lib/systemd/system-generators/torcx-generator[2140]: time="2023-10-02T19:27:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:27:11.820827 /usr/lib/systemd/system-generators/torcx-generator[2140]: time="2023-10-02T19:27:11Z" level=info msg="torcx already run" Oct 2 19:27:12.083717 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. Oct 2 19:27:12.083952 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:27:12.127570 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:27:12.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.277000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.277000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.277000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.277000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.277000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.277000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.278000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.278000 audit: BPF prog-id=58 op=LOAD Oct 2 19:27:12.278000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:27:12.279000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.279000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.279000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.279000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.279000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.279000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.279000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.279000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.279000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.280000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.280000 audit: BPF prog-id=59 op=LOAD Oct 2 19:27:12.280000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:27:12.281000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.281000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.281000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.281000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.281000 audit: BPF prog-id=60 op=LOAD Oct 2 19:27:12.282000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.282000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:27:12.282000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.282000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.282000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.282000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.282000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.282000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.282000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.282000 audit: BPF prog-id=61 op=LOAD Oct 2 19:27:12.282000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:27:12.283000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:27:12.283000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.283000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.283000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.284000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.285000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.285000 audit: BPF prog-id=62 op=LOAD Oct 2 19:27:12.285000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:27:12.288000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.288000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.288000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.288000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.288000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.288000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.288000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.288000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.288000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.289000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.289000 audit: BPF prog-id=63 op=LOAD Oct 2 19:27:12.290000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:27:12.290000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.290000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.290000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.290000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.290000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.290000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.290000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.290000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.290000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.290000 audit: BPF prog-id=64 op=LOAD Oct 2 19:27:12.290000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.290000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.290000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.290000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.290000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.290000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.290000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.290000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.290000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.290000 audit: BPF prog-id=65 op=LOAD Oct 2 19:27:12.290000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:27:12.290000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:27:12.292000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.292000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.292000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.292000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.292000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.292000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.292000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.292000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.292000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.293000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.293000 audit: BPF prog-id=66 op=LOAD Oct 2 19:27:12.293000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:27:12.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.296000 
audit: BPF prog-id=67 op=LOAD Oct 2 19:27:12.296000 audit: BPF prog-id=49 op=UNLOAD Oct 2 19:27:12.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.296000 audit: BPF prog-id=68 op=LOAD Oct 2 19:27:12.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.296000 audit: BPF prog-id=69 op=LOAD Oct 2 19:27:12.296000 audit: BPF prog-id=50 op=UNLOAD Oct 2 19:27:12.296000 audit: BPF prog-id=51 op=UNLOAD Oct 2 19:27:12.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.300000 audit: BPF prog-id=70 op=LOAD Oct 2 19:27:12.300000 audit: BPF prog-id=52 op=UNLOAD Oct 2 19:27:12.301000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.301000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.301000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.301000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.301000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.301000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.301000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.301000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.301000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.302000 audit: BPF prog-id=71 op=LOAD Oct 2 19:27:12.302000 audit: BPF prog-id=53 op=UNLOAD Oct 2 19:27:12.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.302000 audit: BPF prog-id=72 op=LOAD Oct 2 19:27:12.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.302000 audit: BPF prog-id=73 op=LOAD Oct 2 19:27:12.302000 audit: BPF prog-id=54 op=UNLOAD Oct 2 19:27:12.302000 audit: BPF prog-id=55 op=UNLOAD Oct 2 19:27:12.304000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.304000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.304000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.304000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.304000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.304000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.304000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.304000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.304000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 
2 19:27:12.304000 audit: BPF prog-id=74 op=LOAD Oct 2 19:27:12.304000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.304000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.304000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.304000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.304000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.304000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.304000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.304000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.304000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:12.304000 audit: BPF prog-id=75 op=LOAD Oct 2 19:27:12.304000 audit: BPF prog-id=56 op=UNLOAD Oct 2 19:27:12.304000 audit: BPF prog-id=57 op=UNLOAD Oct 2 19:27:12.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:12.341083 systemd[1]: Started kubelet.service. Oct 2 19:27:12.478407 kubelet[2196]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:27:12.478961 kubelet[2196]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 2 19:27:12.479058 kubelet[2196]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
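The two kubelet deprecation warnings above point at the kubelet config file rather than command-line flags. As a rough, hypothetical sketch of that flag-to-config translation (field names assumed from the KubeletConfiguration v1beta1 schema read by kubelet v1.28; the containerd socket path is an assumption, while the volume-plugin and static-pod paths are the ones this kubelet reports further down in the log):

    # Hypothetical sketch only -- not taken from this host's actual config file.
    # Field names assume the KubeletConfiguration v1beta1 schema.
    kubelet_config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        # config-file counterpart of the deprecated --container-runtime-endpoint flag
        # (socket path is an illustrative assumption)
        "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
        # config-file counterpart of the deprecated --volume-plugin-dir flag;
        # path matches the Flexvolume directory the kubelet recreates below
        "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
        # static pod path the kubelet registers below ("Adding static pod path")
        "staticPodPath": "/etc/kubernetes/manifests",
    }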
Oct 2 19:27:12.479416 kubelet[2196]: I1002 19:27:12.479359 2196 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:27:13.994226 kubelet[2196]: I1002 19:27:13.994155 2196 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Oct 2 19:27:13.994226 kubelet[2196]: I1002 19:27:13.994210 2196 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:27:13.995005 kubelet[2196]: I1002 19:27:13.994575 2196 server.go:895] "Client rotation is on, will bootstrap in background" Oct 2 19:27:14.002303 kubelet[2196]: I1002 19:27:14.002250 2196 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:27:14.015930 kubelet[2196]: W1002 19:27:14.015884 2196 machine.go:65] Cannot read vendor id correctly, set empty. Oct 2 19:27:14.017318 kubelet[2196]: I1002 19:27:14.017278 2196 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 2 19:27:14.018189 kubelet[2196]: I1002 19:27:14.018149 2196 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:27:14.018626 kubelet[2196]: I1002 19:27:14.018589 2196 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 2 19:27:14.019008 kubelet[2196]: I1002 19:27:14.018980 2196 topology_manager.go:138] "Creating topology manager with none policy" Oct 2 19:27:14.019139 kubelet[2196]: I1002 19:27:14.019117 2196 container_manager_linux.go:301] "Creating device plugin manager" Oct 2 19:27:14.019490 kubelet[2196]: I1002 19:27:14.019455 2196 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:27:14.020264 kubelet[2196]: I1002 19:27:14.020241 2196 kubelet.go:393] "Attempting to sync node with API server" Oct 2 19:27:14.020401 kubelet[2196]: I1002 19:27:14.020380 2196 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:27:14.020534 kubelet[2196]: I1002 19:27:14.020513 2196 kubelet.go:309] "Adding apiserver pod source" Oct 2 19:27:14.020751 kubelet[2196]: I1002 19:27:14.020731 2196 
apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:27:14.022341 kubelet[2196]: E1002 19:27:14.022284 2196 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:14.022504 kubelet[2196]: E1002 19:27:14.022424 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:14.023926 kubelet[2196]: I1002 19:27:14.023891 2196 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:27:14.025000 kubelet[2196]: W1002 19:27:14.024962 2196 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 2 19:27:14.026734 kubelet[2196]: I1002 19:27:14.026652 2196 server.go:1232] "Started kubelet" Oct 2 19:27:14.029087 kubelet[2196]: I1002 19:27:14.029034 2196 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:27:14.030742 kubelet[2196]: E1002 19:27:14.030632 2196 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:27:14.030742 kubelet[2196]: E1002 19:27:14.030732 2196 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:27:14.034532 kubelet[2196]: I1002 19:27:14.034477 2196 server.go:462] "Adding debug handlers to kubelet server" Oct 2 19:27:14.034000 audit[2196]: AVC avc: denied { mac_admin } for pid=2196 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:14.037716 kernel: kauditd_printk_skb: 429 callbacks suppressed Oct 2 19:27:14.037885 kernel: audit: type=1400 audit(1696274834.034:604): avc: denied { mac_admin } for pid=2196 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:14.045946 kubelet[2196]: I1002 19:27:14.045891 2196 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Oct 2 19:27:14.047017 kubelet[2196]: I1002 19:27:14.046981 2196 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 2 19:27:14.034000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:27:14.049849 kubelet[2196]: I1002 19:27:14.049798 2196 kubelet.go:1386] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:27:14.051537 kernel: audit: type=1401 audit(1696274834.034:604): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:27:14.051676 kernel: audit: type=1300 audit(1696274834.034:604): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000e9e3c0 a1=4000d6f368 a2=4000e9e390 a3=25 items=0 ppid=1 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:14.034000 audit[2196]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000e9e3c0 a1=4000d6f368 
a2=4000e9e390 a3=25 items=0 ppid=1 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:14.058318 kubelet[2196]: I1002 19:27:14.058263 2196 kubelet.go:1390] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:27:14.062031 kubelet[2196]: I1002 19:27:14.061988 2196 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:27:14.034000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:27:14.071894 kubelet[2196]: E1002 19:27:14.071860 2196 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.20.240\" not found" Oct 2 19:27:14.072189 kubelet[2196]: I1002 19:27:14.072157 2196 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 2 19:27:14.072476 kubelet[2196]: I1002 19:27:14.072446 2196 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 2 19:27:14.072752 kubelet[2196]: I1002 19:27:14.072721 2196 reconciler_new.go:29] "Reconciler: start to sync state" Oct 2 19:27:14.075361 kernel: audit: type=1327 audit(1696274834.034:604): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:27:14.075555 kernel: audit: type=1400 audit(1696274834.048:605): avc: denied { mac_admin } for pid=2196 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:14.048000 audit[2196]: AVC avc: denied { mac_admin } for pid=2196 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:14.076424 kubelet[2196]: W1002 19:27:14.076364 2196 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.20.240" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:27:14.076859 kubelet[2196]: E1002 19:27:14.076810 2196 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.20.240" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:27:14.077322 kubelet[2196]: W1002 19:27:14.077272 2196 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:27:14.077540 kubelet[2196]: E1002 19:27:14.077511 2196 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:27:14.077941 kubelet[2196]: E1002 19:27:14.077790 2196 event.go:280] Server 
rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.240.178a60f96b975314", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.20.240", UID:"172.31.20.240", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.20.240"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 26615572, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 26615572, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.20.240"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:14.078514 kubelet[2196]: W1002 19:27:14.078466 2196 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:27:14.078751 kubelet[2196]: E1002 19:27:14.078725 2196 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:27:14.084652 kubelet[2196]: E1002 19:27:14.084591 2196 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.20.240\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Oct 2 19:27:14.084853 kubelet[2196]: E1002 19:27:14.084732 2196 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.240.178a60f96bd560e3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.20.240", UID:"172.31.20.240", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.20.240"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 30682339, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 30682339, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), 
Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.20.240"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:14.048000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:27:14.091325 kernel: audit: type=1401 audit(1696274834.048:605): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:27:14.048000 audit[2196]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000d9b3a0 a1=4000d6f380 a2=4000e9e450 a3=25 items=0 ppid=1 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:14.109326 kernel: audit: type=1300 audit(1696274834.048:605): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000d9b3a0 a1=4000d6f380 a2=4000e9e450 a3=25 items=0 ppid=1 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:14.048000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:27:14.123247 kernel: audit: type=1327 audit(1696274834.048:605): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:27:14.151891 kubelet[2196]: E1002 19:27:14.151730 2196 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.240.178a60f972f0b030", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.20.240", UID:"172.31.20.240", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.20.240 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.20.240"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 149912624, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 149912624, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.20.240"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:27:14.152320 kubelet[2196]: I1002 19:27:14.152262 2196 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 2 19:27:14.152465 kubelet[2196]: I1002 19:27:14.152345 2196 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 2 19:27:14.152465 kubelet[2196]: I1002 19:27:14.152384 2196 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:27:14.153479 kubelet[2196]: E1002 19:27:14.153323 2196 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.240.178a60f972f0d42f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.20.240", UID:"172.31.20.240", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.20.240 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.20.240"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 149921839, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 149921839, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.20.240"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:14.154661 kubelet[2196]: I1002 19:27:14.154605 2196 policy_none.go:49] "None policy: Start" Oct 2 19:27:14.155365 kubelet[2196]: E1002 19:27:14.155195 2196 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.240.178a60f972f10147", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.20.240", UID:"172.31.20.240", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.20.240 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.20.240"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 149933383, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 149933383, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.20.240"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:27:14.156434 kubelet[2196]: I1002 19:27:14.156394 2196 memory_manager.go:169] "Starting memorymanager" policy="None" Oct 2 19:27:14.156678 kubelet[2196]: I1002 19:27:14.156652 2196 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:27:14.167000 audit[2212]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=2212 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:14.167000 audit[2212]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffbc0d520 a2=0 a3=1 items=0 ppid=2196 pid=2212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:14.185835 kernel: audit: type=1325 audit(1696274834.167:606): table=mangle:2 family=2 entries=2 op=nft_register_chain pid=2212 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:14.186002 kernel: audit: type=1300 audit(1696274834.167:606): arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffbc0d520 a2=0 a3=1 items=0 ppid=2196 pid=2212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:14.188514 kubelet[2196]: I1002 19:27:14.188479 2196 kubelet_node_status.go:70] "Attempting to register node" node="172.31.20.240" Oct 2 19:27:14.188905 systemd[1]: Created slice kubepods.slice. Oct 2 19:27:14.167000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:27:14.171000 audit[2214]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2214 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:14.171000 audit[2214]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffd194c240 a2=0 a3=1 items=0 ppid=2196 pid=2214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:14.171000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:27:14.193252 kubelet[2196]: E1002 19:27:14.193142 2196 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.20.240" Oct 2 19:27:14.193805 kubelet[2196]: E1002 19:27:14.193596 2196 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.240.178a60f972f0b030", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.20.240", UID:"172.31.20.240", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.20.240 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", 
Host:"172.31.20.240"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 149912624, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 188422743, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.20.240"}': 'events "172.31.20.240.178a60f972f0b030" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:14.199142 kubelet[2196]: E1002 19:27:14.198981 2196 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.240.178a60f972f0d42f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.20.240", UID:"172.31.20.240", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.20.240 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.20.240"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 149921839, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 188431016, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.20.240"}': 'events "172.31.20.240.178a60f972f0d42f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:27:14.201845 kubelet[2196]: E1002 19:27:14.200872 2196 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.240.178a60f972f10147", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.20.240", UID:"172.31.20.240", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.20.240 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.20.240"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 149933383, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 188435420, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.20.240"}': 'events "172.31.20.240.178a60f972f10147" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:14.204360 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:27:14.212488 systemd[1]: Created slice kubepods-besteffort.slice. Oct 2 19:27:14.220324 kubelet[2196]: I1002 19:27:14.220285 2196 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:27:14.221582 kubelet[2196]: I1002 19:27:14.221543 2196 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:27:14.222195 kubelet[2196]: I1002 19:27:14.222154 2196 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:27:14.220000 audit[2196]: AVC avc: denied { mac_admin } for pid=2196 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:14.220000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:27:14.220000 audit[2196]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a6c900 a1=4000a35a10 a2=4000a6c8d0 a3=25 items=0 ppid=1 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:14.220000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:27:14.226669 kubelet[2196]: E1002 19:27:14.226630 2196 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.20.240\" not found" Oct 2 19:27:14.227543 kubelet[2196]: E1002 19:27:14.226976 2196 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.240.178a60f9775e4ef9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.20.240", UID:"172.31.20.240", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.20.240"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 224205561, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 224205561, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.20.240"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
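The PROCTITLE fields in the audit records above (and in the iptables records that follow) carry the command line as a hex-encoded, NUL-separated argv string; the kubelet one is cut short in the record itself. A small Python sketch, not part of this log, for decoding such a value:

    # Decode an audit PROCTITLE field: hex string -> NUL-separated argv -> command line.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        # argv elements are NUL-separated inside the audit record
        return " ".join(p.decode("utf-8", errors="replace") for p in raw.split(b"\x00") if p)

    # First few argv bytes of the kubelet PROCTITLE above (shortened here for brevity):
    print(decode_proctitle(
        "2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E666967"))
    # prints: /opt/bin/kubelet --bootstrap-kubeconfig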
Oct 2 19:27:14.179000 audit[2216]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=2216 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:14.179000 audit[2216]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffdf4e4d30 a2=0 a3=1 items=0 ppid=2196 pid=2216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:14.179000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:27:14.235000 audit[2221]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=2221 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:14.235000 audit[2221]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe928b1a0 a2=0 a3=1 items=0 ppid=2196 pid=2221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:14.235000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:27:14.291525 kubelet[2196]: E1002 19:27:14.287237 2196 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.20.240\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Oct 2 19:27:14.302000 audit[2226]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2226 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:14.302000 audit[2226]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffea9e4f50 a2=0 a3=1 items=0 ppid=2196 pid=2226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:14.302000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:27:14.304554 kubelet[2196]: I1002 19:27:14.304492 2196 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Oct 2 19:27:14.307000 audit[2228]: NETFILTER_CFG table=mangle:7 family=2 entries=1 op=nft_register_chain pid=2228 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:14.307000 audit[2228]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdfcd24e0 a2=0 a3=1 items=0 ppid=2196 pid=2228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:14.307000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:27:14.307000 audit[2227]: NETFILTER_CFG table=mangle:8 family=10 entries=2 op=nft_register_chain pid=2227 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:14.307000 audit[2227]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffceb7ae10 a2=0 a3=1 items=0 ppid=2196 pid=2227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:14.307000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:27:14.309151 kubelet[2196]: I1002 19:27:14.309095 2196 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 2 19:27:14.309420 kubelet[2196]: I1002 19:27:14.309367 2196 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 2 19:27:14.309420 kubelet[2196]: I1002 19:27:14.309415 2196 kubelet.go:2303] "Starting kubelet main sync loop" Oct 2 19:27:14.309565 kubelet[2196]: E1002 19:27:14.309514 2196 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:27:14.311936 kubelet[2196]: W1002 19:27:14.311903 2196 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:27:14.312178 kubelet[2196]: E1002 19:27:14.312156 2196 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:27:14.313000 audit[2229]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2229 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:14.313000 audit[2229]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=fffffc016900 a2=0 a3=1 items=0 ppid=2196 pid=2229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:14.313000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:27:14.314000 audit[2230]: NETFILTER_CFG table=mangle:10 family=10 entries=1 op=nft_register_chain pid=2230 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:14.314000 audit[2230]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffea833600 a2=0 a3=1 items=0 ppid=2196 pid=2230 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:14.314000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:27:14.317000 audit[2231]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2231 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:14.317000 audit[2231]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd38ad6f0 a2=0 a3=1 items=0 ppid=2196 pid=2231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:14.317000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:27:14.320000 audit[2232]: NETFILTER_CFG table=nat:12 family=10 entries=2 op=nft_register_chain pid=2232 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:14.320000 audit[2232]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=fffff474d5f0 a2=0 a3=1 items=0 ppid=2196 pid=2232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:14.320000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:27:14.324000 audit[2233]: NETFILTER_CFG table=filter:13 family=10 entries=2 op=nft_register_chain pid=2233 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:14.324000 audit[2233]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdb734570 a2=0 a3=1 items=0 ppid=2196 pid=2233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:14.324000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:27:14.394399 kubelet[2196]: I1002 19:27:14.394365 2196 kubelet_node_status.go:70] "Attempting to register node" node="172.31.20.240" Oct 2 19:27:14.396433 kubelet[2196]: E1002 19:27:14.396318 2196 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.240.178a60f972f0b030", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.20.240", UID:"172.31.20.240", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.20.240 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.20.240"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 149912624, time.Local), 
LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 394312671, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.20.240"}': 'events "172.31.20.240.178a60f972f0b030" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:14.396927 kubelet[2196]: E1002 19:27:14.396880 2196 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.20.240" Oct 2 19:27:14.398302 kubelet[2196]: E1002 19:27:14.398203 2196 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.240.178a60f972f0d42f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.20.240", UID:"172.31.20.240", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.20.240 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.20.240"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 149921839, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 394320848, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.20.240"}': 'events "172.31.20.240.178a60f972f0d42f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:27:14.399942 kubelet[2196]: E1002 19:27:14.399837 2196 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.240.178a60f972f10147", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.20.240", UID:"172.31.20.240", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.20.240 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.20.240"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 149933383, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 394328261, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.20.240"}': 'events "172.31.20.240.178a60f972f10147" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:14.695772 kubelet[2196]: E1002 19:27:14.694862 2196 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.20.240\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Oct 2 19:27:14.798432 kubelet[2196]: I1002 19:27:14.798367 2196 kubelet_node_status.go:70] "Attempting to register node" node="172.31.20.240" Oct 2 19:27:14.801603 kubelet[2196]: E1002 19:27:14.801564 2196 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.20.240" Oct 2 19:27:14.803338 kubelet[2196]: E1002 19:27:14.802580 2196 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.240.178a60f972f0b030", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.20.240", UID:"172.31.20.240", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.20.240 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.20.240"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 149912624, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 798318872, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.20.240"}': 'events "172.31.20.240.178a60f972f0b030" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:14.805517 kubelet[2196]: E1002 19:27:14.805381 2196 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.240.178a60f972f0d42f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.20.240", UID:"172.31.20.240", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.20.240 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.20.240"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 149921839, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 798326536, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.20.240"}': 'events "172.31.20.240.178a60f972f0d42f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:14.807812 kubelet[2196]: E1002 19:27:14.807613 2196 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.240.178a60f972f10147", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.20.240", UID:"172.31.20.240", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.20.240 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.20.240"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 149933383, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 14, 798330797, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.20.240"}': 'events "172.31.20.240.178a60f972f10147" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:27:15.001590 kubelet[2196]: I1002 19:27:15.001503 2196 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:27:15.023054 kubelet[2196]: E1002 19:27:15.022990 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:15.453494 kubelet[2196]: E1002 19:27:15.453361 2196 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.20.240" not found Oct 2 19:27:15.503039 kubelet[2196]: E1002 19:27:15.502991 2196 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.20.240\" not found" node="172.31.20.240" Oct 2 19:27:15.604496 kubelet[2196]: I1002 19:27:15.604453 2196 kubelet_node_status.go:70] "Attempting to register node" node="172.31.20.240" Oct 2 19:27:15.611017 kubelet[2196]: I1002 19:27:15.610974 2196 kubelet_node_status.go:73] "Successfully registered node" node="172.31.20.240" Oct 2 19:27:15.634576 kubelet[2196]: I1002 19:27:15.634540 2196 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:27:15.635623 env[1740]: time="2023-10-02T19:27:15.635400062Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 2 19:27:15.636500 kubelet[2196]: I1002 19:27:15.636465 2196 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:27:15.878895 sudo[1992]: pam_unix(sudo:session): session closed for user root Oct 2 19:27:15.877000 audit[1992]: USER_END pid=1992 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:15.878000 audit[1992]: CRED_DISP pid=1992 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:15.903455 sshd[1989]: pam_unix(sshd:session): session closed for user core Oct 2 19:27:15.904000 audit[1989]: USER_END pid=1989 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:15.904000 audit[1989]: CRED_DISP pid=1989 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:15.909526 systemd-logind[1723]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:27:15.910380 systemd[1]: sshd@6-172.31.20.240:22-139.178.89.65:60494.service: Deactivated successfully. Oct 2 19:27:15.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.20.240:22-139.178.89.65:60494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:15.911818 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:27:15.913619 systemd-logind[1723]: Removed session 7. 
Oct 2 19:27:16.023830 kubelet[2196]: E1002 19:27:16.023773 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:16.023830 kubelet[2196]: I1002 19:27:16.023779 2196 apiserver.go:52] "Watching apiserver" Oct 2 19:27:16.027681 kubelet[2196]: I1002 19:27:16.027608 2196 topology_manager.go:215] "Topology Admit Handler" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" podNamespace="kube-system" podName="cilium-lvcwn" Oct 2 19:27:16.027945 kubelet[2196]: I1002 19:27:16.027875 2196 topology_manager.go:215] "Topology Admit Handler" podUID="5602b131-7754-45d2-9a03-db3aa863c2fb" podNamespace="kube-system" podName="kube-proxy-lhmh2" Oct 2 19:27:16.039921 systemd[1]: Created slice kubepods-besteffort-pod5602b131_7754_45d2_9a03_db3aa863c2fb.slice. Oct 2 19:27:16.071607 systemd[1]: Created slice kubepods-burstable-pod2bd04426_239c_4bb0_b854_c99ed0496ddc.slice. Oct 2 19:27:16.074394 kubelet[2196]: I1002 19:27:16.074333 2196 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 2 19:27:16.103883 kubelet[2196]: I1002 19:27:16.103838 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-etc-cni-netd\") pod \"cilium-lvcwn\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " pod="kube-system/cilium-lvcwn" Oct 2 19:27:16.104136 kubelet[2196]: I1002 19:27:16.104110 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-xtables-lock\") pod \"cilium-lvcwn\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " pod="kube-system/cilium-lvcwn" Oct 2 19:27:16.104300 kubelet[2196]: I1002 19:27:16.104276 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-host-proc-sys-kernel\") pod \"cilium-lvcwn\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " pod="kube-system/cilium-lvcwn" Oct 2 19:27:16.104467 kubelet[2196]: I1002 19:27:16.104444 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8lpb\" (UniqueName: \"kubernetes.io/projected/2bd04426-239c-4bb0-b854-c99ed0496ddc-kube-api-access-x8lpb\") pod \"cilium-lvcwn\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " pod="kube-system/cilium-lvcwn" Oct 2 19:27:16.104659 kubelet[2196]: I1002 19:27:16.104637 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5602b131-7754-45d2-9a03-db3aa863c2fb-lib-modules\") pod \"kube-proxy-lhmh2\" (UID: \"5602b131-7754-45d2-9a03-db3aa863c2fb\") " pod="kube-system/kube-proxy-lhmh2" Oct 2 19:27:16.104876 kubelet[2196]: I1002 19:27:16.104854 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-bpf-maps\") pod \"cilium-lvcwn\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " pod="kube-system/cilium-lvcwn" Oct 2 19:27:16.105088 kubelet[2196]: I1002 19:27:16.105065 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-cilium-cgroup\") pod \"cilium-lvcwn\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " pod="kube-system/cilium-lvcwn" Oct 2 19:27:16.105270 kubelet[2196]: I1002 19:27:16.105247 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-host-proc-sys-net\") pod \"cilium-lvcwn\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " pod="kube-system/cilium-lvcwn" Oct 2 19:27:16.105494 kubelet[2196]: I1002 19:27:16.105470 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-cilium-run\") pod \"cilium-lvcwn\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " pod="kube-system/cilium-lvcwn" Oct 2 19:27:16.105664 kubelet[2196]: I1002 19:27:16.105642 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5602b131-7754-45d2-9a03-db3aa863c2fb-xtables-lock\") pod \"kube-proxy-lhmh2\" (UID: \"5602b131-7754-45d2-9a03-db3aa863c2fb\") " pod="kube-system/kube-proxy-lhmh2" Oct 2 19:27:16.105856 kubelet[2196]: I1002 19:27:16.105834 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2bd04426-239c-4bb0-b854-c99ed0496ddc-clustermesh-secrets\") pod \"cilium-lvcwn\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " pod="kube-system/cilium-lvcwn" Oct 2 19:27:16.106071 kubelet[2196]: I1002 19:27:16.106049 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-cni-path\") pod \"cilium-lvcwn\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " pod="kube-system/cilium-lvcwn" Oct 2 19:27:16.106252 kubelet[2196]: I1002 19:27:16.106231 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-lib-modules\") pod \"cilium-lvcwn\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " pod="kube-system/cilium-lvcwn" Oct 2 19:27:16.106472 kubelet[2196]: I1002 19:27:16.106436 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2bd04426-239c-4bb0-b854-c99ed0496ddc-cilium-config-path\") pod \"cilium-lvcwn\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " pod="kube-system/cilium-lvcwn" Oct 2 19:27:16.106566 kubelet[2196]: I1002 19:27:16.106543 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2bd04426-239c-4bb0-b854-c99ed0496ddc-hubble-tls\") pod \"cilium-lvcwn\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " pod="kube-system/cilium-lvcwn" Oct 2 19:27:16.106679 kubelet[2196]: I1002 19:27:16.106645 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5602b131-7754-45d2-9a03-db3aa863c2fb-kube-proxy\") pod \"kube-proxy-lhmh2\" (UID: \"5602b131-7754-45d2-9a03-db3aa863c2fb\") " pod="kube-system/kube-proxy-lhmh2" Oct 2 
19:27:16.106929 kubelet[2196]: I1002 19:27:16.106905 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6sjb\" (UniqueName: \"kubernetes.io/projected/5602b131-7754-45d2-9a03-db3aa863c2fb-kube-api-access-p6sjb\") pod \"kube-proxy-lhmh2\" (UID: \"5602b131-7754-45d2-9a03-db3aa863c2fb\") " pod="kube-system/kube-proxy-lhmh2" Oct 2 19:27:16.107094 kubelet[2196]: I1002 19:27:16.107072 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-hostproc\") pod \"cilium-lvcwn\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " pod="kube-system/cilium-lvcwn" Oct 2 19:27:16.373641 env[1740]: time="2023-10-02T19:27:16.373001513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lhmh2,Uid:5602b131-7754-45d2-9a03-db3aa863c2fb,Namespace:kube-system,Attempt:0,}" Oct 2 19:27:16.384505 env[1740]: time="2023-10-02T19:27:16.384411951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lvcwn,Uid:2bd04426-239c-4bb0-b854-c99ed0496ddc,Namespace:kube-system,Attempt:0,}" Oct 2 19:27:16.970609 env[1740]: time="2023-10-02T19:27:16.970542513Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:16.974866 env[1740]: time="2023-10-02T19:27:16.974784207Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:16.978679 env[1740]: time="2023-10-02T19:27:16.978612080Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:16.981467 env[1740]: time="2023-10-02T19:27:16.981393508Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:16.983396 env[1740]: time="2023-10-02T19:27:16.983336439Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:16.988204 env[1740]: time="2023-10-02T19:27:16.988143995Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:16.992571 env[1740]: time="2023-10-02T19:27:16.992515953Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:16.995117 env[1740]: time="2023-10-02T19:27:16.995065526Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:17.024671 kubelet[2196]: E1002 19:27:17.024581 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:17.048009 env[1740]: time="2023-10-02T19:27:17.047887534Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:27:17.048237 env[1740]: time="2023-10-02T19:27:17.048082364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:27:17.048237 env[1740]: time="2023-10-02T19:27:17.048139846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:27:17.048237 env[1740]: time="2023-10-02T19:27:17.048165436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:27:17.048589 env[1740]: time="2023-10-02T19:27:17.048505231Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fb4fc316f057796b3ad8560cab9ed857d3f366a60842f70f292d1b106c9eefe pid=2253 runtime=io.containerd.runc.v2 Oct 2 19:27:17.049006 env[1740]: time="2023-10-02T19:27:17.048922992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:27:17.049300 env[1740]: time="2023-10-02T19:27:17.049233957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:27:17.050768 env[1740]: time="2023-10-02T19:27:17.050625472Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb pid=2257 runtime=io.containerd.runc.v2 Oct 2 19:27:17.088781 systemd[1]: Started cri-containerd-2fb4fc316f057796b3ad8560cab9ed857d3f366a60842f70f292d1b106c9eefe.scope. Oct 2 19:27:17.125464 systemd[1]: Started cri-containerd-a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb.scope. 
Oct 2 19:27:17.148000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.148000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.148000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.148000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.148000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.148000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.148000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.148000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.148000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.150000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.150000 audit: BPF prog-id=76 op=LOAD Oct 2 19:27:17.151000 audit[2276]: AVC avc: denied { bpf } for pid=2276 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.151000 audit[2276]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000145b38 a2=10 a3=0 items=0 ppid=2253 pid=2276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:17.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266623466633331366630353737393662336164383536306361623965 Oct 2 19:27:17.152000 audit[2276]: AVC avc: denied { perfmon } for pid=2276 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.152000 audit[2276]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=2253 pid=2276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:17.152000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266623466633331366630353737393662336164383536306361623965 Oct 2 19:27:17.152000 audit[2276]: AVC avc: denied { bpf } for pid=2276 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.152000 audit[2276]: AVC avc: denied { bpf } for pid=2276 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.152000 audit[2276]: AVC avc: denied { bpf } for pid=2276 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.152000 audit[2276]: AVC avc: denied { perfmon } for pid=2276 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.152000 audit[2276]: AVC avc: denied { perfmon } for pid=2276 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.152000 audit[2276]: AVC avc: denied { perfmon } for pid=2276 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.152000 audit[2276]: AVC avc: denied { perfmon } for pid=2276 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.152000 audit[2276]: AVC avc: denied { perfmon } for pid=2276 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.152000 audit[2276]: AVC avc: denied { bpf } for pid=2276 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.152000 audit[2276]: AVC avc: denied { bpf } for pid=2276 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.152000 audit: BPF prog-id=77 op=LOAD Oct 2 19:27:17.152000 audit[2276]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001458e0 a2=78 a3=0 items=0 ppid=2253 pid=2276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:17.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266623466633331366630353737393662336164383536306361623965 Oct 2 19:27:17.155000 audit[2276]: AVC avc: denied { bpf } for pid=2276 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.155000 audit[2276]: AVC avc: denied { bpf } for pid=2276 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.155000 audit[2276]: AVC avc: denied { 
perfmon } for pid=2276 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.155000 audit[2276]: AVC avc: denied { perfmon } for pid=2276 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.155000 audit[2276]: AVC avc: denied { perfmon } for pid=2276 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.155000 audit[2276]: AVC avc: denied { perfmon } for pid=2276 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.155000 audit[2276]: AVC avc: denied { perfmon } for pid=2276 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.155000 audit[2276]: AVC avc: denied { bpf } for pid=2276 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.155000 audit[2276]: AVC avc: denied { bpf } for pid=2276 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.155000 audit: BPF prog-id=78 op=LOAD Oct 2 19:27:17.155000 audit[2276]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000145670 a2=78 a3=0 items=0 ppid=2253 pid=2276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:17.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266623466633331366630353737393662336164383536306361623965 Oct 2 19:27:17.157000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:27:17.157000 audit: BPF prog-id=77 op=UNLOAD Oct 2 19:27:17.157000 audit[2276]: AVC avc: denied { bpf } for pid=2276 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.157000 audit[2276]: AVC avc: denied { bpf } for pid=2276 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.157000 audit[2276]: AVC avc: denied { bpf } for pid=2276 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.157000 audit[2276]: AVC avc: denied { perfmon } for pid=2276 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.157000 audit[2276]: AVC avc: denied { perfmon } for pid=2276 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.157000 audit[2276]: AVC avc: denied { perfmon } for pid=2276 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.157000 audit[2276]: AVC avc: denied { perfmon } 
for pid=2276 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.157000 audit[2276]: AVC avc: denied { perfmon } for pid=2276 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.157000 audit[2276]: AVC avc: denied { bpf } for pid=2276 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.157000 audit[2276]: AVC avc: denied { bpf } for pid=2276 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.157000 audit: BPF prog-id=79 op=LOAD Oct 2 19:27:17.157000 audit[2276]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000145b40 a2=78 a3=0 items=0 ppid=2253 pid=2276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:17.157000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266623466633331366630353737393662336164383536306361623965 Oct 2 19:27:17.169000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.169000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.169000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.169000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.169000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.169000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.169000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.169000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.169000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.169000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:27:17.169000 audit: BPF prog-id=80 op=LOAD Oct 2 19:27:17.171000 audit[2278]: AVC avc: denied { bpf } for pid=2278 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.171000 audit[2278]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=2257 pid=2278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:17.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130333638373566326534366161303063343766663637386532363538 Oct 2 19:27:17.172000 audit[2278]: AVC avc: denied { perfmon } for pid=2278 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.172000 audit[2278]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2257 pid=2278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:17.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130333638373566326534366161303063343766663637386532363538 Oct 2 19:27:17.172000 audit[2278]: AVC avc: denied { bpf } for pid=2278 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.172000 audit[2278]: AVC avc: denied { bpf } for pid=2278 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.172000 audit[2278]: AVC avc: denied { bpf } for pid=2278 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.172000 audit[2278]: AVC avc: denied { perfmon } for pid=2278 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.172000 audit[2278]: AVC avc: denied { perfmon } for pid=2278 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.172000 audit[2278]: AVC avc: denied { perfmon } for pid=2278 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.172000 audit[2278]: AVC avc: denied { perfmon } for pid=2278 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.172000 audit[2278]: AVC avc: denied { perfmon } for pid=2278 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.172000 audit[2278]: AVC avc: denied { bpf } for pid=2278 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.172000 audit[2278]: AVC avc: denied { bpf } for pid=2278 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.172000 audit: BPF prog-id=81 op=LOAD Oct 2 19:27:17.172000 audit[2278]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2257 pid=2278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:17.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130333638373566326534366161303063343766663637386532363538 Oct 2 19:27:17.181000 audit[2278]: AVC avc: denied { bpf } for pid=2278 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.181000 audit[2278]: AVC avc: denied { bpf } for pid=2278 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.181000 audit[2278]: AVC avc: denied { perfmon } for pid=2278 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.181000 audit[2278]: AVC avc: denied { perfmon } for pid=2278 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.181000 audit[2278]: AVC avc: denied { perfmon } for pid=2278 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.181000 audit[2278]: AVC avc: denied { perfmon } for pid=2278 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.181000 audit[2278]: AVC avc: denied { perfmon } for pid=2278 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.181000 audit[2278]: AVC avc: denied { bpf } for pid=2278 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.181000 audit[2278]: AVC avc: denied { bpf } for pid=2278 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.181000 audit: BPF prog-id=82 op=LOAD Oct 2 19:27:17.181000 audit[2278]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=2257 pid=2278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:17.181000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130333638373566326534366161303063343766663637386532363538 Oct 2 19:27:17.181000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:27:17.181000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:27:17.181000 audit[2278]: AVC avc: denied { bpf } for pid=2278 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.181000 audit[2278]: AVC avc: denied { bpf } for pid=2278 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.181000 audit[2278]: AVC avc: denied { bpf } for pid=2278 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.181000 audit[2278]: AVC avc: denied { perfmon } for pid=2278 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.181000 audit[2278]: AVC avc: denied { perfmon } for pid=2278 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.181000 audit[2278]: AVC avc: denied { perfmon } for pid=2278 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.181000 audit[2278]: AVC avc: denied { perfmon } for pid=2278 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.181000 audit[2278]: AVC avc: denied { perfmon } for pid=2278 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.181000 audit[2278]: AVC avc: denied { bpf } for pid=2278 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.181000 audit[2278]: AVC avc: denied { bpf } for pid=2278 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:17.181000 audit: BPF prog-id=83 op=LOAD Oct 2 19:27:17.181000 audit[2278]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2257 pid=2278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:17.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130333638373566326534366161303063343766663637386532363538 Oct 2 19:27:17.217614 env[1740]: time="2023-10-02T19:27:17.217519150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lhmh2,Uid:5602b131-7754-45d2-9a03-db3aa863c2fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fb4fc316f057796b3ad8560cab9ed857d3f366a60842f70f292d1b106c9eefe\"" Oct 2 19:27:17.233303 env[1740]: 
time="2023-10-02T19:27:17.221771375Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.2\"" Oct 2 19:27:17.232548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2123112176.mount: Deactivated successfully. Oct 2 19:27:17.235031 env[1740]: time="2023-10-02T19:27:17.234932535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lvcwn,Uid:2bd04426-239c-4bb0-b854-c99ed0496ddc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb\"" Oct 2 19:27:18.025568 kubelet[2196]: E1002 19:27:18.025444 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:18.574513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3542240025.mount: Deactivated successfully. Oct 2 19:27:19.026771 kubelet[2196]: E1002 19:27:19.026656 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:19.383776 env[1740]: time="2023-10-02T19:27:19.383437005Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:19.386934 env[1740]: time="2023-10-02T19:27:19.386879853Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:19.390604 env[1740]: time="2023-10-02T19:27:19.390522168Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:19.394296 env[1740]: time="2023-10-02T19:27:19.394226536Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:19.395864 env[1740]: time="2023-10-02T19:27:19.395799171Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.2\" returns image reference \"sha256:7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa\"" Oct 2 19:27:19.399638 env[1740]: time="2023-10-02T19:27:19.399580108Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 2 19:27:19.401608 env[1740]: time="2023-10-02T19:27:19.401514684Z" level=info msg="CreateContainer within sandbox \"2fb4fc316f057796b3ad8560cab9ed857d3f366a60842f70f292d1b106c9eefe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:27:19.422217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3244294098.mount: Deactivated successfully. Oct 2 19:27:19.432273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount955664624.mount: Deactivated successfully. 
Oct 2 19:27:19.440961 env[1740]: time="2023-10-02T19:27:19.440902186Z" level=info msg="CreateContainer within sandbox \"2fb4fc316f057796b3ad8560cab9ed857d3f366a60842f70f292d1b106c9eefe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c5485eb01e05f72bf2795958b1e0570a072bdc7c15aed6ae32c8d95217d42aeb\"" Oct 2 19:27:19.442574 env[1740]: time="2023-10-02T19:27:19.442524655Z" level=info msg="StartContainer for \"c5485eb01e05f72bf2795958b1e0570a072bdc7c15aed6ae32c8d95217d42aeb\"" Oct 2 19:27:19.485982 systemd[1]: Started cri-containerd-c5485eb01e05f72bf2795958b1e0570a072bdc7c15aed6ae32c8d95217d42aeb.scope. Oct 2 19:27:19.537000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.540627 kernel: kauditd_printk_skb: 157 callbacks suppressed Oct 2 19:27:19.540785 kernel: audit: type=1400 audit(1696274839.537:660): avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.537000 audit[2334]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001bd5a0 a2=3c a3=0 items=0 ppid=2253 pid=2334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:19.559233 kernel: audit: type=1300 audit(1696274839.537:660): arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001bd5a0 a2=3c a3=0 items=0 ppid=2253 pid=2334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:19.559354 kernel: audit: type=1327 audit(1696274839.537:660): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335343835656230316530356637326266323739353935386231653035 Oct 2 19:27:19.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335343835656230316530356637326266323739353935386231653035 Oct 2 19:27:19.538000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.577006 kernel: audit: type=1400 audit(1696274839.538:661): avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.538000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.589539 kernel: audit: type=1400 audit(1696274839.538:661): avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.601190 kernel: audit: type=1400 audit(1696274839.538:661): avc: denied { bpf } for pid=2334 comm="runc" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.538000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.538000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.538000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.609843 kernel: audit: type=1400 audit(1696274839.538:661): avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.538000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.617841 kernel: audit: type=1400 audit(1696274839.538:661): avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.538000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.635510 kernel: audit: type=1400 audit(1696274839.538:661): avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.635641 kernel: audit: type=1400 audit(1696274839.538:661): avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.538000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.538000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.538000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.538000 audit: BPF prog-id=84 op=LOAD Oct 2 19:27:19.538000 audit[2334]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001bd8e0 a2=78 a3=0 items=0 ppid=2253 pid=2334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:19.638174 env[1740]: time="2023-10-02T19:27:19.638086507Z" level=info msg="StartContainer for \"c5485eb01e05f72bf2795958b1e0570a072bdc7c15aed6ae32c8d95217d42aeb\" returns successfully" Oct 2 19:27:19.538000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335343835656230316530356637326266323739353935386231653035 Oct 2 19:27:19.540000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.540000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.540000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.540000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.540000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.540000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.540000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.540000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.540000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.540000 audit: BPF prog-id=85 op=LOAD Oct 2 19:27:19.540000 audit[2334]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=40001bd670 a2=78 a3=0 items=0 ppid=2253 pid=2334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:19.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335343835656230316530356637326266323739353935386231653035 Oct 2 19:27:19.551000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:27:19.551000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:27:19.551000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.551000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.551000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.551000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.551000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.551000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.551000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.551000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.551000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.551000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:19.551000 audit: BPF prog-id=86 op=LOAD Oct 2 19:27:19.551000 audit[2334]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001bdb40 a2=78 a3=0 items=0 ppid=2253 pid=2334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:19.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335343835656230316530356637326266323739353935386231653035 Oct 2 19:27:19.802000 audit[2385]: NETFILTER_CFG table=mangle:14 family=2 entries=1 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:19.802000 audit[2385]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffc7fa810 a2=0 a3=ffff980ce6c0 items=0 ppid=2345 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:19.802000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:27:19.804000 audit[2386]: NETFILTER_CFG table=mangle:15 family=10 entries=1 op=nft_register_chain pid=2386 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:19.804000 audit[2386]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe042e3d0 a2=0 a3=ffffa74336c0 items=0 ppid=2345 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:19.804000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:27:19.806000 audit[2387]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_chain pid=2387 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:19.806000 audit[2387]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6bbad60 a2=0 a3=ffff98c1f6c0 items=0 ppid=2345 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:19.806000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:27:19.808000 audit[2388]: NETFILTER_CFG table=nat:17 family=10 entries=1 op=nft_register_chain pid=2388 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:19.808000 audit[2388]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcf2b08d0 a2=0 a3=ffffac3ff6c0 items=0 ppid=2345 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:19.808000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:27:19.809000 audit[2389]: NETFILTER_CFG table=filter:18 family=2 entries=1 op=nft_register_chain pid=2389 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:19.809000 audit[2389]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffb0158c0 a2=0 a3=ffff89c3a6c0 items=0 ppid=2345 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:19.809000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:27:19.814000 audit[2390]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2390 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:19.814000 audit[2390]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdc2aeae0 a2=0 a3=ffffb7b256c0 items=0 ppid=2345 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:19.814000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:27:19.915000 audit[2391]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:19.915000 audit[2391]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffec792420 a2=0 a3=ffff88b1f6c0 items=0 ppid=2345 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:19.915000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:27:19.924000 audit[2393]: 
NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2393 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:19.924000 audit[2393]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffa091d40 a2=0 a3=ffffacb5f6c0 items=0 ppid=2345 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:19.924000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:27:19.939000 audit[2396]: NETFILTER_CFG table=filter:22 family=2 entries=2 op=nft_register_chain pid=2396 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:19.939000 audit[2396]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe978bd80 a2=0 a3=ffff91f096c0 items=0 ppid=2345 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:19.939000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:27:19.943000 audit[2397]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_chain pid=2397 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:19.943000 audit[2397]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdb550a80 a2=0 a3=ffff95cab6c0 items=0 ppid=2345 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:19.943000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:27:19.951000 audit[2399]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=2399 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:19.951000 audit[2399]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff3edc3d0 a2=0 a3=ffffade9e6c0 items=0 ppid=2345 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:19.951000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:27:19.955000 audit[2400]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_chain pid=2400 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:19.955000 audit[2400]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe38bdff0 a2=0 a3=ffff98dfa6c0 items=0 ppid=2345 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:19.955000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:27:19.968000 audit[2402]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=2402 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:19.968000 audit[2402]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffed2d9830 a2=0 a3=ffffb61da6c0 items=0 ppid=2345 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:19.968000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:27:19.979000 audit[2405]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=2405 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:19.979000 audit[2405]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffffc3f65a0 a2=0 a3=ffff8869f6c0 items=0 ppid=2345 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:19.979000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:27:19.983000 audit[2406]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:19.983000 audit[2406]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd6ee6090 a2=0 a3=ffffbc4eb6c0 items=0 ppid=2345 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:19.983000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:27:19.991000 audit[2408]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2408 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:19.991000 audit[2408]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc332cdc0 a2=0 a3=ffff800f86c0 items=0 ppid=2345 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:19.991000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:27:19.996000 audit[2409]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=2409 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:19.996000 audit[2409]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=104 a0=3 a1=fffff41c96b0 a2=0 a3=ffff90e0f6c0 items=0 ppid=2345 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:19.996000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:27:20.005000 audit[2411]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_rule pid=2411 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:20.005000 audit[2411]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff462eda0 a2=0 a3=ffffb62fd6c0 items=0 ppid=2345 pid=2411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.005000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:27:20.018000 audit[2414]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=2414 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:20.018000 audit[2414]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcdf12dc0 a2=0 a3=ffff8d81b6c0 items=0 ppid=2345 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.018000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:27:20.027715 kubelet[2196]: E1002 19:27:20.027624 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:20.031000 audit[2417]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2417 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:20.031000 audit[2417]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd49ed820 a2=0 a3=ffffa92686c0 items=0 ppid=2345 pid=2417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.031000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:27:20.037000 audit[2418]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=2418 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:20.037000 audit[2418]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffee2781b0 a2=0 a3=ffffa38ec6c0 items=0 ppid=2345 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.037000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:27:20.047000 audit[2420]: NETFILTER_CFG table=nat:35 family=2 entries=2 op=nft_register_chain pid=2420 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:20.047000 audit[2420]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=fffffbd989c0 a2=0 a3=ffffa3c8b6c0 items=0 ppid=2345 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.047000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:27:20.086000 audit[2426]: NETFILTER_CFG table=nat:36 family=2 entries=2 op=nft_register_chain pid=2426 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:20.086000 audit[2426]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffc01c6580 a2=0 a3=ffffb74bb6c0 items=0 ppid=2345 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.086000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:27:20.090000 audit[2427]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=2427 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:20.090000 audit[2427]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff52d2dd0 a2=0 a3=ffffa13eb6c0 items=0 ppid=2345 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.090000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:27:20.098000 audit[2429]: NETFILTER_CFG table=nat:38 family=2 entries=2 op=nft_register_chain pid=2429 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:20.098000 audit[2429]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe9a18ce0 a2=0 a3=ffff86fd66c0 items=0 ppid=2345 pid=2429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.098000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:27:20.142000 audit[2435]: NETFILTER_CFG table=filter:39 family=2 entries=8 op=nft_register_rule pid=2435 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:27:20.142000 audit[2435]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4956 a0=3 a1=ffffe8481580 a2=0 a3=ffff955076c0 
items=0 ppid=2345 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.142000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:27:20.181000 audit[2435]: NETFILTER_CFG table=nat:40 family=2 entries=14 op=nft_register_chain pid=2435 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:27:20.181000 audit[2435]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffe8481580 a2=0 a3=ffff955076c0 items=0 ppid=2345 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.181000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:27:20.188000 audit[2441]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2441 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:20.188000 audit[2441]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff79d7bf0 a2=0 a3=ffff9d7126c0 items=0 ppid=2345 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.188000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:27:20.197000 audit[2443]: NETFILTER_CFG table=filter:42 family=10 entries=2 op=nft_register_chain pid=2443 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:20.197000 audit[2443]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd72d3d20 a2=0 a3=ffff8c9016c0 items=0 ppid=2345 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.197000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:27:20.210000 audit[2446]: NETFILTER_CFG table=filter:43 family=10 entries=2 op=nft_register_chain pid=2446 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:20.210000 audit[2446]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc7cc0600 a2=0 a3=ffffa78626c0 items=0 ppid=2345 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.210000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:27:20.214000 audit[2447]: NETFILTER_CFG table=filter:44 family=10 entries=1 op=nft_register_chain pid=2447 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:20.214000 audit[2447]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe14ac8a0 a2=0 a3=ffffa1c216c0 items=0 ppid=2345 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.214000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:27:20.223000 audit[2449]: NETFILTER_CFG table=filter:45 family=10 entries=1 op=nft_register_rule pid=2449 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:20.223000 audit[2449]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffec5cad70 a2=0 a3=ffff86e846c0 items=0 ppid=2345 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.223000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:27:20.227000 audit[2450]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:20.227000 audit[2450]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdf2bc770 a2=0 a3=ffff801896c0 items=0 ppid=2345 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.227000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:27:20.236000 audit[2452]: NETFILTER_CFG table=filter:47 family=10 entries=1 op=nft_register_rule pid=2452 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:20.236000 audit[2452]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffffe03c4f0 a2=0 a3=ffffaa2816c0 items=0 ppid=2345 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.236000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:27:20.248000 audit[2455]: NETFILTER_CFG table=filter:48 family=10 entries=2 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:20.248000 audit[2455]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffcd2982f0 a2=0 a3=ffff8042d6c0 items=0 ppid=2345 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.248000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:27:20.252000 audit[2456]: NETFILTER_CFG table=filter:49 family=10 entries=1 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:20.252000 audit[2456]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd2a04490 a2=0 a3=ffffa5e3e6c0 items=0 ppid=2345 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.252000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:27:20.260000 audit[2458]: NETFILTER_CFG table=filter:50 family=10 entries=1 op=nft_register_rule pid=2458 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:20.260000 audit[2458]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffec3d850 a2=0 a3=ffff9474c6c0 items=0 ppid=2345 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.260000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:27:20.264000 audit[2459]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=2459 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:20.264000 audit[2459]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeebbac80 a2=0 a3=ffff8fa736c0 items=0 ppid=2345 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.264000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:27:20.277000 audit[2461]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_rule pid=2461 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:20.277000 audit[2461]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdcc51ad0 a2=0 a3=ffff8fdc26c0 items=0 ppid=2345 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.277000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:27:20.290000 audit[2464]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_rule pid=2464 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:20.290000 audit[2464]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffebe7cab0 a2=0 a3=ffffacf576c0 items=0 
ppid=2345 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.290000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:27:20.303000 audit[2467]: NETFILTER_CFG table=filter:54 family=10 entries=1 op=nft_register_rule pid=2467 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:20.303000 audit[2467]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd9faea60 a2=0 a3=ffffb1f5b6c0 items=0 ppid=2345 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.303000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:27:20.308000 audit[2468]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=2468 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:20.308000 audit[2468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe1d89a70 a2=0 a3=ffffb0b306c0 items=0 ppid=2345 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.308000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:27:20.317000 audit[2470]: NETFILTER_CFG table=nat:56 family=10 entries=2 op=nft_register_chain pid=2470 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:20.317000 audit[2470]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffc1f0b6e0 a2=0 a3=ffff8af296c0 items=0 ppid=2345 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.317000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:27:20.331000 audit[2473]: NETFILTER_CFG table=nat:57 family=10 entries=2 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:20.331000 audit[2473]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffffe502d40 a2=0 a3=ffff9ff5a6c0 items=0 ppid=2345 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.331000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:27:20.336000 audit[2474]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:20.336000 audit[2474]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff08250d0 a2=0 a3=ffffa1e8c6c0 items=0 ppid=2345 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.336000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:27:20.345000 audit[2476]: NETFILTER_CFG table=nat:59 family=10 entries=2 op=nft_register_chain pid=2476 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:20.345000 audit[2476]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffffc496c30 a2=0 a3=ffffbe9a26c0 items=0 ppid=2345 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.345000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:27:20.351000 audit[2477]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:20.351000 audit[2477]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffea573cd0 a2=0 a3=ffff82e356c0 items=0 ppid=2345 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.351000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:27:20.361000 audit[2479]: NETFILTER_CFG table=filter:61 family=10 entries=1 op=nft_register_rule pid=2479 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:20.361000 audit[2479]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffeecfeae0 a2=0 a3=ffff95fa36c0 items=0 ppid=2345 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.361000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:27:20.384000 audit[2483]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_rule pid=2483 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:20.384000 audit[2483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd5df72e0 a2=0 a3=ffffba4b86c0 items=0 ppid=2345 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.384000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:27:20.393000 audit[2485]: NETFILTER_CFG table=filter:63 family=10 entries=3 op=nft_register_rule pid=2485 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:27:20.393000 audit[2485]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffffdfc3e0 a2=0 a3=ffffbdaeb6c0 items=0 ppid=2345 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.393000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:27:20.395000 audit[2485]: NETFILTER_CFG table=nat:64 family=10 entries=7 op=nft_register_chain pid=2485 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:27:20.395000 audit[2485]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1968 a0=3 a1=ffffffdfc3e0 a2=0 a3=ffffbdaeb6c0 items=0 ppid=2345 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:20.395000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:27:21.028480 kubelet[2196]: E1002 19:27:21.028413 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:22.029663 kubelet[2196]: E1002 19:27:22.029568 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:23.030790 kubelet[2196]: E1002 19:27:23.030732 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:24.031795 kubelet[2196]: E1002 19:27:24.031731 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:25.032067 kubelet[2196]: E1002 19:27:25.031986 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:26.032690 kubelet[2196]: E1002 19:27:26.032638 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:26.665741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1147034214.mount: Deactivated successfully. Oct 2 19:27:27.033882 kubelet[2196]: E1002 19:27:27.033796 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:28.034902 kubelet[2196]: E1002 19:27:28.034844 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:28.825750 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Oct 2 19:27:28.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:27:28.829266 kernel: kauditd_printk_skb: 186 callbacks suppressed Oct 2 19:27:28.829367 kernel: audit: type=1131 audit(1696274848.824:717): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:28.850000 audit: BPF prog-id=61 op=UNLOAD Oct 2 19:27:28.857194 kernel: audit: type=1334 audit(1696274848.850:718): prog-id=61 op=UNLOAD Oct 2 19:27:28.857283 kernel: audit: type=1334 audit(1696274848.850:719): prog-id=60 op=UNLOAD Oct 2 19:27:28.850000 audit: BPF prog-id=60 op=UNLOAD Oct 2 19:27:28.860233 kernel: audit: type=1334 audit(1696274848.850:720): prog-id=59 op=UNLOAD Oct 2 19:27:28.850000 audit: BPF prog-id=59 op=UNLOAD Oct 2 19:27:29.036283 kubelet[2196]: E1002 19:27:29.036171 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:30.037330 kubelet[2196]: E1002 19:27:30.037224 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:30.590877 env[1740]: time="2023-10-02T19:27:30.590824299Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:30.593773 env[1740]: time="2023-10-02T19:27:30.593653104Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:30.603120 env[1740]: time="2023-10-02T19:27:30.603037249Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:30.606135 env[1740]: time="2023-10-02T19:27:30.606069407Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 2 19:27:30.610669 env[1740]: time="2023-10-02T19:27:30.610549760Z" level=info msg="CreateContainer within sandbox \"a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:27:30.627316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3595995863.mount: Deactivated successfully. Oct 2 19:27:30.638981 env[1740]: time="2023-10-02T19:27:30.638892533Z" level=info msg="CreateContainer within sandbox \"a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff\"" Oct 2 19:27:30.639754 env[1740]: time="2023-10-02T19:27:30.639595059Z" level=info msg="StartContainer for \"79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff\"" Oct 2 19:27:30.691580 systemd[1]: Started cri-containerd-79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff.scope. Oct 2 19:27:30.726199 systemd[1]: cri-containerd-79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff.scope: Deactivated successfully. 
Oct 2 19:27:31.037555 kubelet[2196]: E1002 19:27:31.037503 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:31.401784 env[1740]: time="2023-10-02T19:27:31.401601651Z" level=info msg="shim disconnected" id=79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff Oct 2 19:27:31.402013 env[1740]: time="2023-10-02T19:27:31.401979106Z" level=warning msg="cleaning up after shim disconnected" id=79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff namespace=k8s.io Oct 2 19:27:31.402161 env[1740]: time="2023-10-02T19:27:31.402132842Z" level=info msg="cleaning up dead shim" Oct 2 19:27:31.428321 env[1740]: time="2023-10-02T19:27:31.428257214Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:27:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2513 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:27:31Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:27:31.429110 env[1740]: time="2023-10-02T19:27:31.428969893Z" level=error msg="copy shim log" error="read /proc/self/fd/52: file already closed" Oct 2 19:27:31.431858 env[1740]: time="2023-10-02T19:27:31.429481697Z" level=error msg="Failed to pipe stdout of container \"79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff\"" error="reading from a closed fifo" Oct 2 19:27:31.432087 env[1740]: time="2023-10-02T19:27:31.431764756Z" level=error msg="Failed to pipe stderr of container \"79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff\"" error="reading from a closed fifo" Oct 2 19:27:31.434579 env[1740]: time="2023-10-02T19:27:31.434488663Z" level=error msg="StartContainer for \"79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:27:31.435052 kubelet[2196]: E1002 19:27:31.435007 2196 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff" Oct 2 19:27:31.435577 kubelet[2196]: E1002 19:27:31.435533 2196 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:27:31.435577 kubelet[2196]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:27:31.435577 kubelet[2196]: rm /hostbin/cilium-mount Oct 2 19:27:31.435804 kubelet[2196]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-x8lpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:27:31.435804 kubelet[2196]: E1002 19:27:31.435631 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lvcwn" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" Oct 2 19:27:31.623274 systemd[1]: run-containerd-runc-k8s.io-79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff-runc.cw1lV6.mount: Deactivated successfully. Oct 2 19:27:31.623448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff-rootfs.mount: Deactivated successfully. Oct 2 19:27:32.038537 kubelet[2196]: E1002 19:27:32.038466 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:32.372101 env[1740]: time="2023-10-02T19:27:32.371572916Z" level=info msg="CreateContainer within sandbox \"a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:27:32.391953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2801451923.mount: Deactivated successfully. 
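Every start attempt of this mount-cgroup init container dies with `write /proc/self/attr/keycreate: invalid argument`. That write is the runtime trying to apply the SELinux options requested in the container spec above (SELinuxOptions Type:spc_t) on a host whose SELinux setup rejects it. A minimal diagnostic sketch, not part of the logged system, that just shows what the kernel exposes for these per-process attributes:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Is selinuxfs mounted at all on this host?
	if _, err := os.Stat("/sys/fs/selinux"); err != nil {
		fmt.Println("selinuxfs not available:", err)
	}
	// Inspect the process attributes the runtime would label.
	for _, attr := range []string{"current", "keycreate"} {
		b, err := os.ReadFile("/proc/self/attr/" + attr)
		if err != nil {
			fmt.Printf("/proc/self/attr/%s: %v\n", attr, err)
			continue
		}
		fmt.Printf("/proc/self/attr/%s = %q\n", attr, strings.TrimRight(string(b), "\x00\n"))
	}
}
```

On a host where the requested label cannot be applied, it is the keycreate write that fails first, which is consistent with the repeated RunContainerError entries in this log.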
Oct 2 19:27:32.395187 kubelet[2196]: I1002 19:27:32.395137 2196 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lhmh2" podStartSLOduration=15.217630507 podCreationTimestamp="2023-10-02 19:27:15 +0000 UTC" firstStartedPulling="2023-10-02 19:27:17.220287042 +0000 UTC m=+4.851236076" lastFinishedPulling="2023-10-02 19:27:19.397733232 +0000 UTC m=+7.028682338" observedRunningTime="2023-10-02 19:27:20.353611408 +0000 UTC m=+7.984560454" watchObservedRunningTime="2023-10-02 19:27:32.395076769 +0000 UTC m=+20.026025803" Oct 2 19:27:32.409376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount622162597.mount: Deactivated successfully. Oct 2 19:27:32.413126 env[1740]: time="2023-10-02T19:27:32.413045907Z" level=info msg="CreateContainer within sandbox \"a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"5d4b0e2a52cd0afa9b790626f891c1f14b3fc5db76e33b3071f965f4af3f8410\"" Oct 2 19:27:32.416528 env[1740]: time="2023-10-02T19:27:32.416458361Z" level=info msg="StartContainer for \"5d4b0e2a52cd0afa9b790626f891c1f14b3fc5db76e33b3071f965f4af3f8410\"" Oct 2 19:27:32.461802 systemd[1]: Started cri-containerd-5d4b0e2a52cd0afa9b790626f891c1f14b3fc5db76e33b3071f965f4af3f8410.scope. Oct 2 19:27:32.497635 systemd[1]: cri-containerd-5d4b0e2a52cd0afa9b790626f891c1f14b3fc5db76e33b3071f965f4af3f8410.scope: Deactivated successfully. Oct 2 19:27:32.516669 env[1740]: time="2023-10-02T19:27:32.516597242Z" level=info msg="shim disconnected" id=5d4b0e2a52cd0afa9b790626f891c1f14b3fc5db76e33b3071f965f4af3f8410 Oct 2 19:27:32.517136 env[1740]: time="2023-10-02T19:27:32.517099433Z" level=warning msg="cleaning up after shim disconnected" id=5d4b0e2a52cd0afa9b790626f891c1f14b3fc5db76e33b3071f965f4af3f8410 namespace=k8s.io Oct 2 19:27:32.517264 env[1740]: time="2023-10-02T19:27:32.517235796Z" level=info msg="cleaning up dead shim" Oct 2 19:27:32.543451 env[1740]: time="2023-10-02T19:27:32.543390763Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:27:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2552 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:27:32Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/5d4b0e2a52cd0afa9b790626f891c1f14b3fc5db76e33b3071f965f4af3f8410/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:27:32.544189 env[1740]: time="2023-10-02T19:27:32.544108975Z" level=error msg="copy shim log" error="read /proc/self/fd/52: file already closed" Oct 2 19:27:32.544454 env[1740]: time="2023-10-02T19:27:32.544403260Z" level=error msg="Failed to pipe stdout of container \"5d4b0e2a52cd0afa9b790626f891c1f14b3fc5db76e33b3071f965f4af3f8410\"" error="reading from a closed fifo" Oct 2 19:27:32.544727 env[1740]: time="2023-10-02T19:27:32.544628956Z" level=error msg="Failed to pipe stderr of container \"5d4b0e2a52cd0afa9b790626f891c1f14b3fc5db76e33b3071f965f4af3f8410\"" error="reading from a closed fifo" Oct 2 19:27:32.548708 env[1740]: time="2023-10-02T19:27:32.548584607Z" level=error msg="StartContainer for \"5d4b0e2a52cd0afa9b790626f891c1f14b3fc5db76e33b3071f965f4af3f8410\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:27:32.549125 
kubelet[2196]: E1002 19:27:32.549073 2196 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="5d4b0e2a52cd0afa9b790626f891c1f14b3fc5db76e33b3071f965f4af3f8410" Oct 2 19:27:32.549290 kubelet[2196]: E1002 19:27:32.549240 2196 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:27:32.549290 kubelet[2196]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:27:32.549290 kubelet[2196]: rm /hostbin/cilium-mount Oct 2 19:27:32.549290 kubelet[2196]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-x8lpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:27:32.549714 kubelet[2196]: E1002 19:27:32.549305 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lvcwn" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" Oct 2 19:27:33.039430 kubelet[2196]: E1002 19:27:33.039374 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:33.373061 kubelet[2196]: I1002 19:27:33.372942 2196 scope.go:117] "RemoveContainer" 
containerID="79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff" Oct 2 19:27:33.373836 kubelet[2196]: I1002 19:27:33.373801 2196 scope.go:117] "RemoveContainer" containerID="79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff" Oct 2 19:27:33.377254 env[1740]: time="2023-10-02T19:27:33.377202745Z" level=info msg="RemoveContainer for \"79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff\"" Oct 2 19:27:33.379302 env[1740]: time="2023-10-02T19:27:33.379244882Z" level=info msg="RemoveContainer for \"79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff\"" Oct 2 19:27:33.379496 env[1740]: time="2023-10-02T19:27:33.379405562Z" level=error msg="RemoveContainer for \"79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff\" failed" error="failed to set removing state for container \"79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff\": container is already in removing state" Oct 2 19:27:33.379795 kubelet[2196]: E1002 19:27:33.379753 2196 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff\": container is already in removing state" containerID="79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff" Oct 2 19:27:33.379932 kubelet[2196]: E1002 19:27:33.379836 2196 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff": container is already in removing state; Skipping pod "cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc)" Oct 2 19:27:33.380333 kubelet[2196]: E1002 19:27:33.380296 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc)\"" pod="kube-system/cilium-lvcwn" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" Oct 2 19:27:33.382609 env[1740]: time="2023-10-02T19:27:33.382556507Z" level=info msg="RemoveContainer for \"79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff\" returns successfully" Oct 2 19:27:34.021173 kubelet[2196]: E1002 19:27:34.021098 2196 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:34.039754 kubelet[2196]: E1002 19:27:34.039645 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:34.378338 kubelet[2196]: E1002 19:27:34.378204 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc)\"" pod="kube-system/cilium-lvcwn" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" Oct 2 19:27:34.509489 kubelet[2196]: W1002 19:27:34.509426 2196 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bd04426_239c_4bb0_b854_c99ed0496ddc.slice/cri-containerd-79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff.scope WatchSource:0}: container "79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff" in namespace "k8s.io": not 
found Oct 2 19:27:35.041203 kubelet[2196]: E1002 19:27:35.041129 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:36.041645 kubelet[2196]: E1002 19:27:36.041611 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:37.042937 kubelet[2196]: E1002 19:27:37.042873 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:37.618664 kubelet[2196]: W1002 19:27:37.618619 2196 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bd04426_239c_4bb0_b854_c99ed0496ddc.slice/cri-containerd-5d4b0e2a52cd0afa9b790626f891c1f14b3fc5db76e33b3071f965f4af3f8410.scope WatchSource:0}: task 5d4b0e2a52cd0afa9b790626f891c1f14b3fc5db76e33b3071f965f4af3f8410 not found: not found Oct 2 19:27:38.044056 kubelet[2196]: E1002 19:27:38.044012 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:39.045410 kubelet[2196]: E1002 19:27:39.045366 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:40.046761 kubelet[2196]: E1002 19:27:40.046670 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:41.048192 kubelet[2196]: E1002 19:27:41.048119 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:42.048639 kubelet[2196]: E1002 19:27:42.048593 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:43.049594 kubelet[2196]: E1002 19:27:43.049526 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:43.377485 update_engine[1724]: I1002 19:27:43.377066 1724 update_attempter.cc:505] Updating boot flags... Oct 2 19:27:44.049904 kubelet[2196]: E1002 19:27:44.049772 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:45.050278 kubelet[2196]: E1002 19:27:45.050232 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:46.052145 kubelet[2196]: E1002 19:27:46.052073 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:47.053372 kubelet[2196]: E1002 19:27:47.053320 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:47.314921 env[1740]: time="2023-10-02T19:27:47.314465934Z" level=info msg="CreateContainer within sandbox \"a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:27:47.333838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3921088584.mount: Deactivated successfully. Oct 2 19:27:47.343087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3314113905.mount: Deactivated successfully. 
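The RemoveContainer exchange above shows the kubelet asking the runtime to remove the same failed init container twice in quick succession; the runtime accepts the first request and answers the second with "container is already in removing state", which the kubelet then treats as safe to skip. A generic sketch of that kind of idempotency guard, with purely illustrative names (not containerd's actual types):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errAlreadyRemoving = errors.New("container is already in removing state")

// store tracks which container IDs already have a removal in flight.
type store struct {
	mu       sync.Mutex
	removing map[string]bool
}

// markRemoving succeeds for the first caller and rejects later ones.
func (s *store) markRemoving(id string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.removing[id] {
		return errAlreadyRemoving
	}
	s.removing[id] = true
	return nil
}

func main() {
	s := &store{removing: map[string]bool{}}
	id := "79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff"
	for i := 0; i < 2; i++ {
		if err := s.markRemoving(id); err != nil {
			// Second caller: tolerate the error, the removal is already in flight.
			fmt.Println("skip:", err)
			continue
		}
		fmt.Println("removing", id[:12])
	}
}
```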
Oct 2 19:27:47.347257 env[1740]: time="2023-10-02T19:27:47.347174506Z" level=info msg="CreateContainer within sandbox \"a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1\"" Oct 2 19:27:47.348841 env[1740]: time="2023-10-02T19:27:47.348795413Z" level=info msg="StartContainer for \"c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1\"" Oct 2 19:27:47.396075 systemd[1]: Started cri-containerd-c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1.scope. Oct 2 19:27:47.434678 systemd[1]: cri-containerd-c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1.scope: Deactivated successfully. Oct 2 19:27:47.454624 env[1740]: time="2023-10-02T19:27:47.454557200Z" level=info msg="shim disconnected" id=c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1 Oct 2 19:27:47.454993 env[1740]: time="2023-10-02T19:27:47.454960927Z" level=warning msg="cleaning up after shim disconnected" id=c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1 namespace=k8s.io Oct 2 19:27:47.455136 env[1740]: time="2023-10-02T19:27:47.455108524Z" level=info msg="cleaning up dead shim" Oct 2 19:27:47.486962 env[1740]: time="2023-10-02T19:27:47.486898615Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:27:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2776 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:27:47Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:27:47.487639 env[1740]: time="2023-10-02T19:27:47.487561159Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:27:47.488887 env[1740]: time="2023-10-02T19:27:47.488817297Z" level=error msg="Failed to pipe stdout of container \"c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1\"" error="reading from a closed fifo" Oct 2 19:27:47.489116 env[1740]: time="2023-10-02T19:27:47.489065949Z" level=error msg="Failed to pipe stderr of container \"c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1\"" error="reading from a closed fifo" Oct 2 19:27:47.491496 env[1740]: time="2023-10-02T19:27:47.491428948Z" level=error msg="StartContainer for \"c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:27:47.492587 kubelet[2196]: E1002 19:27:47.491950 2196 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1" Oct 2 19:27:47.492587 kubelet[2196]: E1002 19:27:47.492090 2196 kuberuntime_manager.go:1209] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:27:47.492587 kubelet[2196]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:27:47.492587 kubelet[2196]: rm /hostbin/cilium-mount Oct 2 19:27:47.492587 kubelet[2196]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-x8lpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:27:47.492587 kubelet[2196]: E1002 19:27:47.492155 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lvcwn" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" Oct 2 19:27:48.054833 kubelet[2196]: E1002 19:27:48.054784 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:48.326441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1-rootfs.mount: Deactivated successfully. 
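Interleaved with the container failures, the kubelet logs "Unable to read config path ... /etc/kubernetes/manifests" roughly once a second: its static-pod source keeps polling a directory that does not exist on this node. A minimal sketch of the check being made; creating an empty directory is the usual way to quiet the message (assuming this node is not meant to run static pods):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Path taken from the recurring kubelet message in this log.
	const staticPodPath = "/etc/kubernetes/manifests"

	if _, err := os.Stat(staticPodPath); os.IsNotExist(err) {
		fmt.Printf("path does not exist, ignoring: %s\n", staticPodPath)
		// e.g. os.MkdirAll(staticPodPath, 0o755) would stop the repeated log line.
	}
}
```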
Oct 2 19:27:48.425602 kubelet[2196]: I1002 19:27:48.425568 2196 scope.go:117] "RemoveContainer" containerID="5d4b0e2a52cd0afa9b790626f891c1f14b3fc5db76e33b3071f965f4af3f8410" Oct 2 19:27:48.426418 kubelet[2196]: I1002 19:27:48.426365 2196 scope.go:117] "RemoveContainer" containerID="5d4b0e2a52cd0afa9b790626f891c1f14b3fc5db76e33b3071f965f4af3f8410" Oct 2 19:27:48.428548 env[1740]: time="2023-10-02T19:27:48.428475571Z" level=info msg="RemoveContainer for \"5d4b0e2a52cd0afa9b790626f891c1f14b3fc5db76e33b3071f965f4af3f8410\"" Oct 2 19:27:48.429350 env[1740]: time="2023-10-02T19:27:48.429282299Z" level=info msg="RemoveContainer for \"5d4b0e2a52cd0afa9b790626f891c1f14b3fc5db76e33b3071f965f4af3f8410\"" Oct 2 19:27:48.429458 env[1740]: time="2023-10-02T19:27:48.429403371Z" level=error msg="RemoveContainer for \"5d4b0e2a52cd0afa9b790626f891c1f14b3fc5db76e33b3071f965f4af3f8410\" failed" error="failed to set removing state for container \"5d4b0e2a52cd0afa9b790626f891c1f14b3fc5db76e33b3071f965f4af3f8410\": container is already in removing state" Oct 2 19:27:48.429918 kubelet[2196]: E1002 19:27:48.429886 2196 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"5d4b0e2a52cd0afa9b790626f891c1f14b3fc5db76e33b3071f965f4af3f8410\": container is already in removing state" containerID="5d4b0e2a52cd0afa9b790626f891c1f14b3fc5db76e33b3071f965f4af3f8410" Oct 2 19:27:48.430163 kubelet[2196]: E1002 19:27:48.430137 2196 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "5d4b0e2a52cd0afa9b790626f891c1f14b3fc5db76e33b3071f965f4af3f8410": container is already in removing state; Skipping pod "cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc)" Oct 2 19:27:48.431105 kubelet[2196]: E1002 19:27:48.431074 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc)\"" pod="kube-system/cilium-lvcwn" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" Oct 2 19:27:48.434513 env[1740]: time="2023-10-02T19:27:48.434437480Z" level=info msg="RemoveContainer for \"5d4b0e2a52cd0afa9b790626f891c1f14b3fc5db76e33b3071f965f4af3f8410\" returns successfully" Oct 2 19:27:49.055172 kubelet[2196]: E1002 19:27:49.055129 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:50.056352 kubelet[2196]: E1002 19:27:50.056281 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:50.561816 kubelet[2196]: W1002 19:27:50.561039 2196 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bd04426_239c_4bb0_b854_c99ed0496ddc.slice/cri-containerd-c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1.scope WatchSource:0}: task c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1 not found: not found Oct 2 19:27:51.057157 kubelet[2196]: E1002 19:27:51.057114 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:52.057924 kubelet[2196]: E1002 19:27:52.057880 2196 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:53.058800 kubelet[2196]: E1002 19:27:53.058765 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:54.021109 kubelet[2196]: E1002 19:27:54.021048 2196 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:54.059832 kubelet[2196]: E1002 19:27:54.059770 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:55.060171 kubelet[2196]: E1002 19:27:55.060125 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:56.061499 kubelet[2196]: E1002 19:27:56.061457 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:57.063096 kubelet[2196]: E1002 19:27:57.063034 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:58.064087 kubelet[2196]: E1002 19:27:58.064044 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:59.065389 kubelet[2196]: E1002 19:27:59.065319 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:00.066183 kubelet[2196]: E1002 19:28:00.066138 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:01.067159 kubelet[2196]: E1002 19:28:01.067102 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:02.068023 kubelet[2196]: E1002 19:28:02.067956 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:03.069095 kubelet[2196]: E1002 19:28:03.068996 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:03.311431 kubelet[2196]: E1002 19:28:03.311370 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc)\"" pod="kube-system/cilium-lvcwn" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" Oct 2 19:28:04.070080 kubelet[2196]: E1002 19:28:04.070002 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:05.071719 kubelet[2196]: E1002 19:28:05.071655 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:06.072854 kubelet[2196]: E1002 19:28:06.072813 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:07.073689 kubelet[2196]: E1002 19:28:07.073628 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:08.074367 kubelet[2196]: E1002 19:28:08.074299 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:09.074581 kubelet[2196]: E1002 
19:28:09.074509 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:10.075046 kubelet[2196]: E1002 19:28:10.075003 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:11.076175 kubelet[2196]: E1002 19:28:11.076110 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:12.076663 kubelet[2196]: E1002 19:28:12.076585 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:13.077833 kubelet[2196]: E1002 19:28:13.077791 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:14.021057 kubelet[2196]: E1002 19:28:14.020994 2196 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:14.079323 kubelet[2196]: E1002 19:28:14.079278 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:14.315387 env[1740]: time="2023-10-02T19:28:14.314734241Z" level=info msg="CreateContainer within sandbox \"a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:28:14.337732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount530503965.mount: Deactivated successfully. Oct 2 19:28:14.346919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount521913262.mount: Deactivated successfully. Oct 2 19:28:14.352669 env[1740]: time="2023-10-02T19:28:14.352606157Z" level=info msg="CreateContainer within sandbox \"a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486\"" Oct 2 19:28:14.353969 env[1740]: time="2023-10-02T19:28:14.353917329Z" level=info msg="StartContainer for \"76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486\"" Oct 2 19:28:14.404304 systemd[1]: Started cri-containerd-76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486.scope. Oct 2 19:28:14.441081 systemd[1]: cri-containerd-76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486.scope: Deactivated successfully. 
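The cadvisor watch-event warnings in this log print the full systemd cgroup path of each failed attempt (kubepods-burstable-pod<uid>.slice/cri-containerd-<id>.scope, with dashes in the pod UID turned into underscores). A small string-formatting sketch that reconstructs that path from the pod UID and container ID seen above, based only on the paths printed here:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	podUID := "2bd04426-239c-4bb0-b854-c99ed0496ddc"
	containerID := "79400d02dbf12a3e9c6ee20bf323b2d0109e754d88f7497ec2e0c2585c496eff"

	// systemd cgroup driver: pod UID dashes become underscores in the slice name.
	slice := "kubepods-burstable-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
	scope := "cri-containerd-" + containerID + ".scope"

	fmt.Println("/kubepods.slice/kubepods-burstable.slice/" + slice + "/" + scope)
}
```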
Oct 2 19:28:14.460431 env[1740]: time="2023-10-02T19:28:14.460341842Z" level=info msg="shim disconnected" id=76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486 Oct 2 19:28:14.460431 env[1740]: time="2023-10-02T19:28:14.460422606Z" level=warning msg="cleaning up after shim disconnected" id=76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486 namespace=k8s.io Oct 2 19:28:14.460431 env[1740]: time="2023-10-02T19:28:14.460448156Z" level=info msg="cleaning up dead shim" Oct 2 19:28:14.488601 env[1740]: time="2023-10-02T19:28:14.488539937Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:28:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2819 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:28:14Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:28:14.489320 env[1740]: time="2023-10-02T19:28:14.489242314Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:28:14.489859 env[1740]: time="2023-10-02T19:28:14.489790482Z" level=error msg="Failed to pipe stdout of container \"76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486\"" error="reading from a closed fifo" Oct 2 19:28:14.491862 env[1740]: time="2023-10-02T19:28:14.491793146Z" level=error msg="Failed to pipe stderr of container \"76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486\"" error="reading from a closed fifo" Oct 2 19:28:14.494050 env[1740]: time="2023-10-02T19:28:14.493975412Z" level=error msg="StartContainer for \"76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:28:14.495082 kubelet[2196]: E1002 19:28:14.494401 2196 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486" Oct 2 19:28:14.495082 kubelet[2196]: E1002 19:28:14.494545 2196 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:28:14.495082 kubelet[2196]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:28:14.495082 kubelet[2196]: rm /hostbin/cilium-mount Oct 2 19:28:14.495082 kubelet[2196]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-x8lpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:28:14.495082 kubelet[2196]: E1002 19:28:14.494610 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lvcwn" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" Oct 2 19:28:15.080064 kubelet[2196]: E1002 19:28:15.079987 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:15.327950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486-rootfs.mount: Deactivated successfully. 
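Each failed attempt is followed by the same shim cleanup warnings: init.pid was never written, and copying the shim log fails with "file already closed" / "reading from a closed fifo", because the container process never started and its stdio pipes were torn down before anything could be read. A tiny self-contained demonstration of that error class (not containerd's shim code):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	r, w, err := os.Pipe()
	if err != nil {
		panic(err)
	}
	// Simulate the writer going away and the reader being torn down.
	w.Close()
	r.Close()

	buf := make([]byte, 16)
	if _, err := r.Read(buf); err != nil {
		fmt.Println("read error:", err) // e.g. "read |0: file already closed"
	}
}
```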
Oct 2 19:28:15.490575 kubelet[2196]: I1002 19:28:15.490525 2196 scope.go:117] "RemoveContainer" containerID="c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1" Oct 2 19:28:15.491183 kubelet[2196]: I1002 19:28:15.491146 2196 scope.go:117] "RemoveContainer" containerID="c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1" Oct 2 19:28:15.494178 env[1740]: time="2023-10-02T19:28:15.494128340Z" level=info msg="RemoveContainer for \"c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1\"" Oct 2 19:28:15.494897 env[1740]: time="2023-10-02T19:28:15.494199264Z" level=info msg="RemoveContainer for \"c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1\"" Oct 2 19:28:15.495167 env[1740]: time="2023-10-02T19:28:15.495105387Z" level=error msg="RemoveContainer for \"c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1\" failed" error="failed to set removing state for container \"c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1\": container is already in removing state" Oct 2 19:28:15.495441 kubelet[2196]: E1002 19:28:15.495408 2196 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1\": container is already in removing state" containerID="c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1" Oct 2 19:28:15.495588 kubelet[2196]: I1002 19:28:15.495561 2196 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1"} err="rpc error: code = Unknown desc = failed to set removing state for container \"c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1\": container is already in removing state" Oct 2 19:28:15.502053 env[1740]: time="2023-10-02T19:28:15.501974074Z" level=info msg="RemoveContainer for \"c8a18b09497f5039b0c7164d87638b22ec8ef6ba40c41bd7065c24310a57a1f1\" returns successfully" Oct 2 19:28:15.502785 kubelet[2196]: E1002 19:28:15.502748 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc)\"" pod="kube-system/cilium-lvcwn" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" Oct 2 19:28:16.080550 kubelet[2196]: E1002 19:28:16.080499 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:17.081282 kubelet[2196]: E1002 19:28:17.081239 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:17.567717 kubelet[2196]: W1002 19:28:17.567660 2196 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bd04426_239c_4bb0_b854_c99ed0496ddc.slice/cri-containerd-76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486.scope WatchSource:0}: task 76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486 not found: not found Oct 2 19:28:18.082605 kubelet[2196]: E1002 19:28:18.082541 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:19.082757 kubelet[2196]: E1002 19:28:19.082684 2196 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:20.083952 kubelet[2196]: E1002 19:28:20.083885 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:21.084102 kubelet[2196]: E1002 19:28:21.084055 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:22.085574 kubelet[2196]: E1002 19:28:22.085502 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:23.086267 kubelet[2196]: E1002 19:28:23.086225 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:24.087916 kubelet[2196]: E1002 19:28:24.087854 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:25.088634 kubelet[2196]: E1002 19:28:25.088561 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:26.089748 kubelet[2196]: E1002 19:28:26.089673 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:27.090086 kubelet[2196]: E1002 19:28:27.090022 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:28.091133 kubelet[2196]: E1002 19:28:28.091077 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:29.092178 kubelet[2196]: E1002 19:28:29.092113 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:29.311391 kubelet[2196]: E1002 19:28:29.311329 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc)\"" pod="kube-system/cilium-lvcwn" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" Oct 2 19:28:30.093060 kubelet[2196]: E1002 19:28:30.093013 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:31.094028 kubelet[2196]: E1002 19:28:31.093947 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:32.095557 kubelet[2196]: E1002 19:28:32.095513 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:33.096607 kubelet[2196]: E1002 19:28:33.096571 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:34.021630 kubelet[2196]: E1002 19:28:34.021595 2196 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:34.098240 kubelet[2196]: E1002 19:28:34.098181 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:35.098595 kubelet[2196]: E1002 19:28:35.098551 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:36.100123 
kubelet[2196]: E1002 19:28:36.100057 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:37.100899 kubelet[2196]: E1002 19:28:37.100861 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:38.102131 kubelet[2196]: E1002 19:28:38.102069 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:39.102417 kubelet[2196]: E1002 19:28:39.102358 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:40.103059 kubelet[2196]: E1002 19:28:40.103013 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:41.104723 kubelet[2196]: E1002 19:28:41.104660 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:42.104848 kubelet[2196]: E1002 19:28:42.104818 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:43.106062 kubelet[2196]: E1002 19:28:43.106015 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:44.107597 kubelet[2196]: E1002 19:28:44.107479 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:44.311250 kubelet[2196]: E1002 19:28:44.311182 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc)\"" pod="kube-system/cilium-lvcwn" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" Oct 2 19:28:45.108521 kubelet[2196]: E1002 19:28:45.108455 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:46.109129 kubelet[2196]: E1002 19:28:46.109091 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:47.110732 kubelet[2196]: E1002 19:28:47.110656 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:48.111852 kubelet[2196]: E1002 19:28:48.111789 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:49.112482 kubelet[2196]: E1002 19:28:49.112436 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:50.113952 kubelet[2196]: E1002 19:28:50.113910 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:51.115559 kubelet[2196]: E1002 19:28:51.115523 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:52.116604 kubelet[2196]: E1002 19:28:52.116526 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:53.117227 kubelet[2196]: E1002 19:28:53.117178 2196 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:54.020961 kubelet[2196]: E1002 19:28:54.020894 2196 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:54.118399 kubelet[2196]: E1002 19:28:54.118326 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:55.119407 kubelet[2196]: E1002 19:28:55.119344 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:56.120554 kubelet[2196]: E1002 19:28:56.120508 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:56.314024 env[1740]: time="2023-10-02T19:28:56.313962973Z" level=info msg="CreateContainer within sandbox \"a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:28:56.331016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2536656212.mount: Deactivated successfully. Oct 2 19:28:56.342240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount934522970.mount: Deactivated successfully. Oct 2 19:28:56.349222 env[1740]: time="2023-10-02T19:28:56.349135364Z" level=info msg="CreateContainer within sandbox \"a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004\"" Oct 2 19:28:56.350399 env[1740]: time="2023-10-02T19:28:56.350353557Z" level=info msg="StartContainer for \"f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004\"" Oct 2 19:28:56.392540 systemd[1]: Started cri-containerd-f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004.scope. Oct 2 19:28:56.433218 systemd[1]: cri-containerd-f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004.scope: Deactivated successfully. 
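The pod_workers entries show the restart back-off for mount-cgroup doubling across attempts: 10s, 20s, then 40s above, and 1m20s a little further down. A sketch of that doubling; the 5-minute ceiling is the usual kubelet default and is an assumption here, not something visible in this log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	backoff := 10 * time.Second        // first CrashLoopBackOff value in this log
	const maxBackoff = 5 * time.Minute // assumed kubelet ceiling, not shown here

	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: back-off %s\n", attempt, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
	// Prints 10s, 20s, 40s, 1m20s, ... matching the sequence observed above.
}
```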
Oct 2 19:28:56.455074 env[1740]: time="2023-10-02T19:28:56.454962319Z" level=info msg="shim disconnected" id=f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004 Oct 2 19:28:56.455339 env[1740]: time="2023-10-02T19:28:56.455067448Z" level=warning msg="cleaning up after shim disconnected" id=f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004 namespace=k8s.io Oct 2 19:28:56.455339 env[1740]: time="2023-10-02T19:28:56.455113622Z" level=info msg="cleaning up dead shim" Oct 2 19:28:56.481131 env[1740]: time="2023-10-02T19:28:56.481055921Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:28:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2858 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:28:56Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:28:56.481613 env[1740]: time="2023-10-02T19:28:56.481524568Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:28:56.482871 env[1740]: time="2023-10-02T19:28:56.482803000Z" level=error msg="Failed to pipe stdout of container \"f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004\"" error="reading from a closed fifo" Oct 2 19:28:56.482984 env[1740]: time="2023-10-02T19:28:56.482905273Z" level=error msg="Failed to pipe stderr of container \"f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004\"" error="reading from a closed fifo" Oct 2 19:28:56.485505 env[1740]: time="2023-10-02T19:28:56.485433062Z" level=error msg="StartContainer for \"f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:28:56.486667 kubelet[2196]: E1002 19:28:56.486018 2196 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004" Oct 2 19:28:56.486667 kubelet[2196]: E1002 19:28:56.486164 2196 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:28:56.486667 kubelet[2196]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:28:56.486667 kubelet[2196]: rm /hostbin/cilium-mount Oct 2 19:28:56.486667 kubelet[2196]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-x8lpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:28:56.486667 kubelet[2196]: E1002 19:28:56.486242 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lvcwn" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" Oct 2 19:28:56.577853 kubelet[2196]: I1002 19:28:56.577821 2196 scope.go:117] "RemoveContainer" containerID="76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486" Oct 2 19:28:56.578649 kubelet[2196]: I1002 19:28:56.578621 2196 scope.go:117] "RemoveContainer" containerID="76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486" Oct 2 19:28:56.581505 env[1740]: time="2023-10-02T19:28:56.581432617Z" level=info msg="RemoveContainer for \"76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486\"" Oct 2 19:28:56.584043 env[1740]: time="2023-10-02T19:28:56.583989613Z" level=info msg="RemoveContainer for \"76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486\"" Oct 2 19:28:56.584499 env[1740]: time="2023-10-02T19:28:56.584447328Z" level=error msg="RemoveContainer for \"76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486\" failed" error="failed to set removing state for container \"76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486\": container is already in removing state" Oct 2 19:28:56.585023 kubelet[2196]: E1002 19:28:56.584987 2196 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486\": 
container is already in removing state" containerID="76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486" Oct 2 19:28:56.585170 kubelet[2196]: E1002 19:28:56.585050 2196 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486": container is already in removing state; Skipping pod "cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc)" Oct 2 19:28:56.585566 kubelet[2196]: E1002 19:28:56.585532 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc)\"" pod="kube-system/cilium-lvcwn" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" Oct 2 19:28:56.588518 env[1740]: time="2023-10-02T19:28:56.588458240Z" level=info msg="RemoveContainer for \"76e8e191cacb2206dc690cffec0e68d4ba28b24005b6a4eee762e10de81be486\" returns successfully" Oct 2 19:28:57.122328 kubelet[2196]: E1002 19:28:57.122260 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:57.326553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004-rootfs.mount: Deactivated successfully. Oct 2 19:28:58.123020 kubelet[2196]: E1002 19:28:58.122971 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:59.124178 kubelet[2196]: E1002 19:28:59.124108 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:59.561213 kubelet[2196]: W1002 19:28:59.561165 2196 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bd04426_239c_4bb0_b854_c99ed0496ddc.slice/cri-containerd-f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004.scope WatchSource:0}: task f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004 not found: not found Oct 2 19:29:00.124712 kubelet[2196]: E1002 19:29:00.124650 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:01.125564 kubelet[2196]: E1002 19:29:01.125501 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:02.126083 kubelet[2196]: E1002 19:29:02.126002 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:03.127836 kubelet[2196]: E1002 19:29:03.127765 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:04.128077 kubelet[2196]: E1002 19:29:04.128042 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:05.129229 kubelet[2196]: E1002 19:29:05.129159 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:06.129890 kubelet[2196]: E1002 19:29:06.129807 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:29:07.131655 kubelet[2196]: E1002 19:29:07.131580 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:07.311429 kubelet[2196]: E1002 19:29:07.311362 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc)\"" pod="kube-system/cilium-lvcwn" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" Oct 2 19:29:08.132734 kubelet[2196]: E1002 19:29:08.132666 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:09.133066 kubelet[2196]: E1002 19:29:09.133020 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:10.134276 kubelet[2196]: E1002 19:29:10.134208 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:11.134428 kubelet[2196]: E1002 19:29:11.134381 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:12.135432 kubelet[2196]: E1002 19:29:12.135365 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:13.135546 kubelet[2196]: E1002 19:29:13.135508 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:14.021681 kubelet[2196]: E1002 19:29:14.021620 2196 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:14.136708 kubelet[2196]: E1002 19:29:14.136644 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:14.137772 kubelet[2196]: E1002 19:29:14.137746 2196 kubelet_node_status.go:452] "Node not becoming ready in time after startup" Oct 2 19:29:14.252424 kubelet[2196]: E1002 19:29:14.252388 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:15.137259 kubelet[2196]: E1002 19:29:15.137188 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:16.137409 kubelet[2196]: E1002 19:29:16.137338 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:17.137754 kubelet[2196]: E1002 19:29:17.137692 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:18.138521 kubelet[2196]: E1002 19:29:18.138458 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:19.139464 kubelet[2196]: E1002 19:29:19.139399 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:19.254765 kubelet[2196]: E1002 19:29:19.254725 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 
19:29:20.140512 kubelet[2196]: E1002 19:29:20.140464 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:21.141652 kubelet[2196]: E1002 19:29:21.141608 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:22.142440 kubelet[2196]: E1002 19:29:22.142368 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:22.312888 kubelet[2196]: E1002 19:29:22.312847 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc)\"" pod="kube-system/cilium-lvcwn" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" Oct 2 19:29:23.143129 kubelet[2196]: E1002 19:29:23.143061 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:24.143737 kubelet[2196]: E1002 19:29:24.143628 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:24.255851 kubelet[2196]: E1002 19:29:24.255803 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:25.144378 kubelet[2196]: E1002 19:29:25.144304 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:26.145554 kubelet[2196]: E1002 19:29:26.145505 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:27.147284 kubelet[2196]: E1002 19:29:27.147219 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:28.148235 kubelet[2196]: E1002 19:29:28.148166 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:29.149176 kubelet[2196]: E1002 19:29:29.149102 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:29.257711 kubelet[2196]: E1002 19:29:29.257613 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:30.149963 kubelet[2196]: E1002 19:29:30.149897 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:31.150587 kubelet[2196]: E1002 19:29:31.150544 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:32.151619 kubelet[2196]: E1002 19:29:32.151550 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:33.152776 kubelet[2196]: E1002 19:29:33.152729 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:34.021358 kubelet[2196]: E1002 19:29:34.021291 2196 file.go:104] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:29:34.154094 kubelet[2196]: E1002 19:29:34.154036 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:34.258744 kubelet[2196]: E1002 19:29:34.258685 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:35.154658 kubelet[2196]: E1002 19:29:35.154610 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:36.155881 kubelet[2196]: E1002 19:29:36.155813 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:36.311635 kubelet[2196]: E1002 19:29:36.311572 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc)\"" pod="kube-system/cilium-lvcwn" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" Oct 2 19:29:37.156929 kubelet[2196]: E1002 19:29:37.156858 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:38.158007 kubelet[2196]: E1002 19:29:38.157960 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:39.159469 kubelet[2196]: E1002 19:29:39.159420 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:39.259756 kubelet[2196]: E1002 19:29:39.259689 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:40.161062 kubelet[2196]: E1002 19:29:40.161017 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:41.162734 kubelet[2196]: E1002 19:29:41.162678 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:42.164316 kubelet[2196]: E1002 19:29:42.164246 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:43.165054 kubelet[2196]: E1002 19:29:43.164981 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:44.165943 kubelet[2196]: E1002 19:29:44.165898 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:44.261235 kubelet[2196]: E1002 19:29:44.261201 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:45.167636 kubelet[2196]: E1002 19:29:45.167568 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:46.168302 kubelet[2196]: E1002 19:29:46.168233 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:47.169259 kubelet[2196]: E1002 
19:29:47.169185 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:48.170181 kubelet[2196]: E1002 19:29:48.170119 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:48.312776 kubelet[2196]: E1002 19:29:48.312727 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc)\"" pod="kube-system/cilium-lvcwn" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" Oct 2 19:29:49.170280 kubelet[2196]: E1002 19:29:49.170234 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:49.262626 kubelet[2196]: E1002 19:29:49.262592 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:50.172085 kubelet[2196]: E1002 19:29:50.172036 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:51.173752 kubelet[2196]: E1002 19:29:51.173678 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:52.174689 kubelet[2196]: E1002 19:29:52.174641 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:53.176265 kubelet[2196]: E1002 19:29:53.176219 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:54.021016 kubelet[2196]: E1002 19:29:54.020950 2196 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:54.177922 kubelet[2196]: E1002 19:29:54.177797 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:54.263773 kubelet[2196]: E1002 19:29:54.263740 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:55.178188 kubelet[2196]: E1002 19:29:55.178139 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:56.179933 kubelet[2196]: E1002 19:29:56.179882 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:57.181078 kubelet[2196]: E1002 19:29:57.181032 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:58.182464 kubelet[2196]: E1002 19:29:58.182418 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:59.183485 kubelet[2196]: E1002 19:29:59.183440 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:59.265629 kubelet[2196]: E1002 19:29:59.265595 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:00.184845 kubelet[2196]: E1002 19:30:00.184776 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:01.184952 kubelet[2196]: E1002 19:30:01.184900 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:02.185712 kubelet[2196]: E1002 19:30:02.185636 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:02.314649 kubelet[2196]: E1002 19:30:02.314591 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc)\"" pod="kube-system/cilium-lvcwn" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" Oct 2 19:30:03.186052 kubelet[2196]: E1002 19:30:03.185985 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:04.186681 kubelet[2196]: E1002 19:30:04.186633 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:04.267548 kubelet[2196]: E1002 19:30:04.267496 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:05.187792 kubelet[2196]: E1002 19:30:05.187746 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:06.188567 kubelet[2196]: E1002 19:30:06.188492 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:07.189854 kubelet[2196]: E1002 19:30:07.189786 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:08.190153 kubelet[2196]: E1002 19:30:08.190085 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:09.191274 kubelet[2196]: E1002 19:30:09.191199 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:09.268611 kubelet[2196]: E1002 19:30:09.268578 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:10.191370 kubelet[2196]: E1002 19:30:10.191324 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:11.192730 kubelet[2196]: E1002 19:30:11.192652 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:12.194102 kubelet[2196]: E1002 19:30:12.194034 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:13.195007 kubelet[2196]: E1002 19:30:13.194934 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:13.310770 kubelet[2196]: E1002 19:30:13.310731 2196 
pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc)\"" pod="kube-system/cilium-lvcwn" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" Oct 2 19:30:14.021269 kubelet[2196]: E1002 19:30:14.021216 2196 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:14.195171 kubelet[2196]: E1002 19:30:14.195128 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:14.269825 kubelet[2196]: E1002 19:30:14.269792 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:15.196669 kubelet[2196]: E1002 19:30:15.196623 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:16.197987 kubelet[2196]: E1002 19:30:16.197925 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:17.198860 kubelet[2196]: E1002 19:30:17.198790 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:18.199511 kubelet[2196]: E1002 19:30:18.199456 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:19.201043 kubelet[2196]: E1002 19:30:19.200999 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:19.271581 kubelet[2196]: E1002 19:30:19.271542 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:20.202229 kubelet[2196]: E1002 19:30:20.202185 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:21.203923 kubelet[2196]: E1002 19:30:21.203861 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:22.204885 kubelet[2196]: E1002 19:30:22.204809 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:23.206105 kubelet[2196]: E1002 19:30:23.206058 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:24.207137 kubelet[2196]: E1002 19:30:24.207068 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:24.272678 kubelet[2196]: E1002 19:30:24.272633 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:25.207998 kubelet[2196]: E1002 19:30:25.207901 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:26.208588 kubelet[2196]: E1002 19:30:26.208520 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:27.210163 kubelet[2196]: E1002 19:30:27.210100 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:27.314269 env[1740]: time="2023-10-02T19:30:27.314212382Z" level=info msg="CreateContainer within sandbox \"a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}" Oct 2 19:30:27.332680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount644869605.mount: Deactivated successfully. Oct 2 19:30:27.342798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1958181322.mount: Deactivated successfully. Oct 2 19:30:27.350809 env[1740]: time="2023-10-02T19:30:27.350746967Z" level=info msg="CreateContainer within sandbox \"a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id \"72b061bfeffa84adc11dd23f7e44259fed7619f069fa0f2fea6f5904bb8d8f10\"" Oct 2 19:30:27.352157 env[1740]: time="2023-10-02T19:30:27.352092606Z" level=info msg="StartContainer for \"72b061bfeffa84adc11dd23f7e44259fed7619f069fa0f2fea6f5904bb8d8f10\"" Oct 2 19:30:27.398754 systemd[1]: Started cri-containerd-72b061bfeffa84adc11dd23f7e44259fed7619f069fa0f2fea6f5904bb8d8f10.scope. Oct 2 19:30:27.438031 systemd[1]: cri-containerd-72b061bfeffa84adc11dd23f7e44259fed7619f069fa0f2fea6f5904bb8d8f10.scope: Deactivated successfully. Oct 2 19:30:27.457389 env[1740]: time="2023-10-02T19:30:27.457303218Z" level=info msg="shim disconnected" id=72b061bfeffa84adc11dd23f7e44259fed7619f069fa0f2fea6f5904bb8d8f10 Oct 2 19:30:27.457389 env[1740]: time="2023-10-02T19:30:27.457380716Z" level=warning msg="cleaning up after shim disconnected" id=72b061bfeffa84adc11dd23f7e44259fed7619f069fa0f2fea6f5904bb8d8f10 namespace=k8s.io Oct 2 19:30:27.457794 env[1740]: time="2023-10-02T19:30:27.457404753Z" level=info msg="cleaning up dead shim" Oct 2 19:30:27.482814 env[1740]: time="2023-10-02T19:30:27.482732785Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:30:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2906 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:30:27Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/72b061bfeffa84adc11dd23f7e44259fed7619f069fa0f2fea6f5904bb8d8f10/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:30:27.483461 env[1740]: time="2023-10-02T19:30:27.483370432Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:30:27.484837 env[1740]: time="2023-10-02T19:30:27.484782480Z" level=error msg="Failed to pipe stdout of container \"72b061bfeffa84adc11dd23f7e44259fed7619f069fa0f2fea6f5904bb8d8f10\"" error="reading from a closed fifo" Oct 2 19:30:27.485381 env[1740]: time="2023-10-02T19:30:27.484997249Z" level=error msg="Failed to pipe stderr of container \"72b061bfeffa84adc11dd23f7e44259fed7619f069fa0f2fea6f5904bb8d8f10\"" error="reading from a closed fifo" Oct 2 19:30:27.487483 env[1740]: time="2023-10-02T19:30:27.487413985Z" level=error msg="StartContainer for \"72b061bfeffa84adc11dd23f7e44259fed7619f069fa0f2fea6f5904bb8d8f10\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" 
Oct 2 19:30:27.488469 kubelet[2196]: E1002 19:30:27.487995 2196 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="72b061bfeffa84adc11dd23f7e44259fed7619f069fa0f2fea6f5904bb8d8f10" Oct 2 19:30:27.488469 kubelet[2196]: E1002 19:30:27.488371 2196 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:30:27.488469 kubelet[2196]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:30:27.488469 kubelet[2196]: rm /hostbin/cilium-mount Oct 2 19:30:27.488469 kubelet[2196]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-x8lpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:30:27.488469 kubelet[2196]: E1002 19:30:27.488435 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lvcwn" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" Oct 2 19:30:27.773600 kubelet[2196]: I1002 19:30:27.772001 2196 scope.go:117] "RemoveContainer" containerID="f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004" Oct 2 19:30:27.773600 kubelet[2196]: I1002 19:30:27.772544 2196 scope.go:117] "RemoveContainer" 
containerID="f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004" Oct 2 19:30:27.775935 env[1740]: time="2023-10-02T19:30:27.775837777Z" level=info msg="RemoveContainer for \"f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004\"" Oct 2 19:30:27.776321 env[1740]: time="2023-10-02T19:30:27.776279676Z" level=info msg="RemoveContainer for \"f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004\"" Oct 2 19:30:27.776604 env[1740]: time="2023-10-02T19:30:27.776539529Z" level=error msg="RemoveContainer for \"f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004\" failed" error="failed to set removing state for container \"f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004\": container is already in removing state" Oct 2 19:30:27.777475 kubelet[2196]: E1002 19:30:27.777444 2196 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004\": container is already in removing state" containerID="f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004" Oct 2 19:30:27.777732 kubelet[2196]: E1002 19:30:27.777671 2196 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004": container is already in removing state; Skipping pod "cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc)" Oct 2 19:30:27.778307 kubelet[2196]: E1002 19:30:27.778283 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-lvcwn_kube-system(2bd04426-239c-4bb0-b854-c99ed0496ddc)\"" pod="kube-system/cilium-lvcwn" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" Oct 2 19:30:27.781846 env[1740]: time="2023-10-02T19:30:27.781790886Z" level=info msg="RemoveContainer for \"f5f7a5a9173f6b1505813e241f0af882661aa920d7c978a349b345a89625c004\" returns successfully" Oct 2 19:30:28.211436 kubelet[2196]: E1002 19:30:28.210730 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:28.326293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72b061bfeffa84adc11dd23f7e44259fed7619f069fa0f2fea6f5904bb8d8f10-rootfs.mount: Deactivated successfully. 
Oct 2 19:30:29.210958 kubelet[2196]: E1002 19:30:29.210898 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:29.274548 kubelet[2196]: E1002 19:30:29.274497 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:30.211287 kubelet[2196]: E1002 19:30:30.211217 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:30.563190 kubelet[2196]: W1002 19:30:30.563135 2196 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bd04426_239c_4bb0_b854_c99ed0496ddc.slice/cri-containerd-72b061bfeffa84adc11dd23f7e44259fed7619f069fa0f2fea6f5904bb8d8f10.scope WatchSource:0}: task 72b061bfeffa84adc11dd23f7e44259fed7619f069fa0f2fea6f5904bb8d8f10 not found: not found Oct 2 19:30:31.212046 kubelet[2196]: E1002 19:30:31.212000 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:31.806553 env[1740]: time="2023-10-02T19:30:31.806477602Z" level=info msg="StopPodSandbox for \"a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb\"" Oct 2 19:30:31.809606 env[1740]: time="2023-10-02T19:30:31.806591412Z" level=info msg="Container to stop \"72b061bfeffa84adc11dd23f7e44259fed7619f069fa0f2fea6f5904bb8d8f10\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:30:31.808974 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb-shm.mount: Deactivated successfully. Oct 2 19:30:31.826741 systemd[1]: cri-containerd-a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb.scope: Deactivated successfully. Oct 2 19:30:31.826000 audit: BPF prog-id=80 op=UNLOAD Oct 2 19:30:31.831940 kernel: audit: type=1334 audit(1696275031.826:721): prog-id=80 op=UNLOAD Oct 2 19:30:31.833000 audit: BPF prog-id=83 op=UNLOAD Oct 2 19:30:31.837746 kernel: audit: type=1334 audit(1696275031.833:722): prog-id=83 op=UNLOAD Oct 2 19:30:31.880922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb-rootfs.mount: Deactivated successfully. 
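When the sandbox's scope is stopped, the BPF programs that were attached for its cgroup (device filters and the like) are unloaded, and the kernel's BPF audit records (type=1334) note the affected prog-ids, which is what the audit lines above show. A tiny parser for those records, with the field layout assumed only from the two lines in this log:

# Extracts timestamp, audit serial, prog-id and op from a type=1334 BPF
# audit record so UNLOAD events can be correlated with the scope teardown.
import re

RECORD = "audit: type=1334 audit(1696275031.826:721): prog-id=80 op=UNLOAD"
PATTERN = re.compile(
    r"audit\((?P<ts>[\d.]+):(?P<serial>\d+)\).*?prog-id=(?P<prog_id>\d+) op=(?P<op>\w+)"
)

m = PATTERN.search(RECORD)
if m:
    print(m.groupdict())
    # {'ts': '1696275031.826', 'serial': '721', 'prog_id': '80', 'op': 'UNLOAD'}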
Oct 2 19:30:31.899537 env[1740]: time="2023-10-02T19:30:31.899459559Z" level=info msg="shim disconnected" id=a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb Oct 2 19:30:31.899848 env[1740]: time="2023-10-02T19:30:31.899533493Z" level=warning msg="cleaning up after shim disconnected" id=a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb namespace=k8s.io Oct 2 19:30:31.899848 env[1740]: time="2023-10-02T19:30:31.899557229Z" level=info msg="cleaning up dead shim" Oct 2 19:30:31.925249 env[1740]: time="2023-10-02T19:30:31.925167926Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:30:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2939 runtime=io.containerd.runc.v2\n" Oct 2 19:30:31.925785 env[1740]: time="2023-10-02T19:30:31.925737063Z" level=info msg="TearDown network for sandbox \"a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb\" successfully" Oct 2 19:30:31.925911 env[1740]: time="2023-10-02T19:30:31.925784284Z" level=info msg="StopPodSandbox for \"a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb\" returns successfully" Oct 2 19:30:31.960556 kubelet[2196]: I1002 19:30:31.960501 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2bd04426-239c-4bb0-b854-c99ed0496ddc-clustermesh-secrets\") pod \"2bd04426-239c-4bb0-b854-c99ed0496ddc\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " Oct 2 19:30:31.961194 kubelet[2196]: I1002 19:30:31.960576 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2bd04426-239c-4bb0-b854-c99ed0496ddc-cilium-config-path\") pod \"2bd04426-239c-4bb0-b854-c99ed0496ddc\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " Oct 2 19:30:31.961194 kubelet[2196]: I1002 19:30:31.960623 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2bd04426-239c-4bb0-b854-c99ed0496ddc-hubble-tls\") pod \"2bd04426-239c-4bb0-b854-c99ed0496ddc\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " Oct 2 19:30:31.961194 kubelet[2196]: I1002 19:30:31.960679 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-cilium-cgroup\") pod \"2bd04426-239c-4bb0-b854-c99ed0496ddc\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " Oct 2 19:30:31.961194 kubelet[2196]: I1002 19:30:31.960747 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-cilium-run\") pod \"2bd04426-239c-4bb0-b854-c99ed0496ddc\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " Oct 2 19:30:31.961194 kubelet[2196]: I1002 19:30:31.960790 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-bpf-maps\") pod \"2bd04426-239c-4bb0-b854-c99ed0496ddc\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " Oct 2 19:30:31.961194 kubelet[2196]: I1002 19:30:31.960828 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-hostproc\") pod \"2bd04426-239c-4bb0-b854-c99ed0496ddc\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " Oct 
2 19:30:31.961194 kubelet[2196]: I1002 19:30:31.960866 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-xtables-lock\") pod \"2bd04426-239c-4bb0-b854-c99ed0496ddc\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " Oct 2 19:30:31.961194 kubelet[2196]: I1002 19:30:31.960907 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-host-proc-sys-kernel\") pod \"2bd04426-239c-4bb0-b854-c99ed0496ddc\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " Oct 2 19:30:31.961194 kubelet[2196]: I1002 19:30:31.960945 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-host-proc-sys-net\") pod \"2bd04426-239c-4bb0-b854-c99ed0496ddc\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " Oct 2 19:30:31.961194 kubelet[2196]: I1002 19:30:31.960984 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-cni-path\") pod \"2bd04426-239c-4bb0-b854-c99ed0496ddc\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " Oct 2 19:30:31.961194 kubelet[2196]: I1002 19:30:31.961023 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-lib-modules\") pod \"2bd04426-239c-4bb0-b854-c99ed0496ddc\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " Oct 2 19:30:31.961194 kubelet[2196]: I1002 19:30:31.961063 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-etc-cni-netd\") pod \"2bd04426-239c-4bb0-b854-c99ed0496ddc\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " Oct 2 19:30:31.961194 kubelet[2196]: I1002 19:30:31.961108 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8lpb\" (UniqueName: \"kubernetes.io/projected/2bd04426-239c-4bb0-b854-c99ed0496ddc-kube-api-access-x8lpb\") pod \"2bd04426-239c-4bb0-b854-c99ed0496ddc\" (UID: \"2bd04426-239c-4bb0-b854-c99ed0496ddc\") " Oct 2 19:30:31.962011 kubelet[2196]: I1002 19:30:31.961591 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-hostproc" (OuterVolumeSpecName: "hostproc") pod "2bd04426-239c-4bb0-b854-c99ed0496ddc" (UID: "2bd04426-239c-4bb0-b854-c99ed0496ddc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:31.967053 kubelet[2196]: I1002 19:30:31.966977 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bd04426-239c-4bb0-b854-c99ed0496ddc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2bd04426-239c-4bb0-b854-c99ed0496ddc" (UID: "2bd04426-239c-4bb0-b854-c99ed0496ddc"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:30:31.967542 kubelet[2196]: I1002 19:30:31.967454 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2bd04426-239c-4bb0-b854-c99ed0496ddc" (UID: "2bd04426-239c-4bb0-b854-c99ed0496ddc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:31.967542 kubelet[2196]: I1002 19:30:31.967521 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2bd04426-239c-4bb0-b854-c99ed0496ddc" (UID: "2bd04426-239c-4bb0-b854-c99ed0496ddc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:31.967715 kubelet[2196]: I1002 19:30:31.967562 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2bd04426-239c-4bb0-b854-c99ed0496ddc" (UID: "2bd04426-239c-4bb0-b854-c99ed0496ddc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:31.967715 kubelet[2196]: I1002 19:30:31.967605 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2bd04426-239c-4bb0-b854-c99ed0496ddc" (UID: "2bd04426-239c-4bb0-b854-c99ed0496ddc"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:31.967715 kubelet[2196]: I1002 19:30:31.967648 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2bd04426-239c-4bb0-b854-c99ed0496ddc" (UID: "2bd04426-239c-4bb0-b854-c99ed0496ddc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:31.967955 kubelet[2196]: I1002 19:30:31.967690 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2bd04426-239c-4bb0-b854-c99ed0496ddc" (UID: "2bd04426-239c-4bb0-b854-c99ed0496ddc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:31.967955 kubelet[2196]: I1002 19:30:31.967821 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2bd04426-239c-4bb0-b854-c99ed0496ddc" (UID: "2bd04426-239c-4bb0-b854-c99ed0496ddc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:31.967955 kubelet[2196]: I1002 19:30:31.967860 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-cni-path" (OuterVolumeSpecName: "cni-path") pod "2bd04426-239c-4bb0-b854-c99ed0496ddc" (UID: "2bd04426-239c-4bb0-b854-c99ed0496ddc"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:31.967955 kubelet[2196]: I1002 19:30:31.967899 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2bd04426-239c-4bb0-b854-c99ed0496ddc" (UID: "2bd04426-239c-4bb0-b854-c99ed0496ddc"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:31.977022 kubelet[2196]: I1002 19:30:31.976963 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd04426-239c-4bb0-b854-c99ed0496ddc-kube-api-access-x8lpb" (OuterVolumeSpecName: "kube-api-access-x8lpb") pod "2bd04426-239c-4bb0-b854-c99ed0496ddc" (UID: "2bd04426-239c-4bb0-b854-c99ed0496ddc"). InnerVolumeSpecName "kube-api-access-x8lpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:30:31.977103 systemd[1]: var-lib-kubelet-pods-2bd04426\x2d239c\x2d4bb0\x2db854\x2dc99ed0496ddc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx8lpb.mount: Deactivated successfully. Oct 2 19:30:31.982871 systemd[1]: var-lib-kubelet-pods-2bd04426\x2d239c\x2d4bb0\x2db854\x2dc99ed0496ddc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:30:31.985240 kubelet[2196]: I1002 19:30:31.985179 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd04426-239c-4bb0-b854-c99ed0496ddc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2bd04426-239c-4bb0-b854-c99ed0496ddc" (UID: "2bd04426-239c-4bb0-b854-c99ed0496ddc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:30:31.987673 systemd[1]: var-lib-kubelet-pods-2bd04426\x2d239c\x2d4bb0\x2db854\x2dc99ed0496ddc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:30:31.989965 kubelet[2196]: I1002 19:30:31.989917 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd04426-239c-4bb0-b854-c99ed0496ddc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2bd04426-239c-4bb0-b854-c99ed0496ddc" (UID: "2bd04426-239c-4bb0-b854-c99ed0496ddc"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:30:32.063446 kubelet[2196]: I1002 19:30:32.061409 2196 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-cilium-cgroup\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:30:32.063726 kubelet[2196]: I1002 19:30:32.063679 2196 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-cilium-run\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:30:32.063878 kubelet[2196]: I1002 19:30:32.063852 2196 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2bd04426-239c-4bb0-b854-c99ed0496ddc-clustermesh-secrets\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:30:32.064026 kubelet[2196]: I1002 19:30:32.064007 2196 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2bd04426-239c-4bb0-b854-c99ed0496ddc-cilium-config-path\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:30:32.064164 kubelet[2196]: I1002 19:30:32.064145 2196 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2bd04426-239c-4bb0-b854-c99ed0496ddc-hubble-tls\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:30:32.064312 kubelet[2196]: I1002 19:30:32.064293 2196 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-xtables-lock\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:30:32.064464 kubelet[2196]: I1002 19:30:32.064445 2196 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-host-proc-sys-kernel\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:30:32.064608 kubelet[2196]: I1002 19:30:32.064589 2196 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-bpf-maps\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:30:32.064772 kubelet[2196]: I1002 19:30:32.064754 2196 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-hostproc\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:30:32.064909 kubelet[2196]: I1002 19:30:32.064890 2196 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-etc-cni-netd\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:30:32.065058 kubelet[2196]: I1002 19:30:32.065039 2196 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-x8lpb\" (UniqueName: \"kubernetes.io/projected/2bd04426-239c-4bb0-b854-c99ed0496ddc-kube-api-access-x8lpb\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:30:32.065194 kubelet[2196]: I1002 19:30:32.065175 2196 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-host-proc-sys-net\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:30:32.065344 kubelet[2196]: I1002 19:30:32.065324 2196 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-cni-path\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:30:32.065480 kubelet[2196]: I1002 19:30:32.065461 2196 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2bd04426-239c-4bb0-b854-c99ed0496ddc-lib-modules\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:30:32.213023 kubelet[2196]: E1002 19:30:32.212972 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:32.322184 systemd[1]: Removed slice kubepods-burstable-pod2bd04426_239c_4bb0_b854_c99ed0496ddc.slice. Oct 2 19:30:32.786628 kubelet[2196]: I1002 19:30:32.786595 2196 scope.go:117] "RemoveContainer" containerID="72b061bfeffa84adc11dd23f7e44259fed7619f069fa0f2fea6f5904bb8d8f10" Oct 2 19:30:32.791179 env[1740]: time="2023-10-02T19:30:32.791100283Z" level=info msg="RemoveContainer for \"72b061bfeffa84adc11dd23f7e44259fed7619f069fa0f2fea6f5904bb8d8f10\"" Oct 2 19:30:32.795197 env[1740]: time="2023-10-02T19:30:32.795132029Z" level=info msg="RemoveContainer for \"72b061bfeffa84adc11dd23f7e44259fed7619f069fa0f2fea6f5904bb8d8f10\" returns successfully" Oct 2 19:30:33.214207 kubelet[2196]: E1002 19:30:33.213666 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:34.021386 kubelet[2196]: E1002 19:30:34.021336 2196 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:34.215673 kubelet[2196]: E1002 19:30:34.215621 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:34.276018 kubelet[2196]: E1002 19:30:34.275543 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:34.314384 kubelet[2196]: I1002 19:30:34.314326 2196 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" path="/var/lib/kubelet/pods/2bd04426-239c-4bb0-b854-c99ed0496ddc/volumes" Oct 2 19:30:35.217036 kubelet[2196]: E1002 19:30:35.216960 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:36.217773 kubelet[2196]: E1002 19:30:36.217663 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:37.000917 kubelet[2196]: I1002 19:30:37.000855 2196 topology_manager.go:215] "Topology Admit Handler" podUID="ac545ceb-d093-4cae-a8b7-5fbb02efaa26" podNamespace="kube-system" podName="cilium-hbvfj" Oct 2 19:30:37.001090 kubelet[2196]: E1002 19:30:37.000932 2196 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" containerName="mount-cgroup" Oct 2 19:30:37.001090 kubelet[2196]: E1002 19:30:37.000955 2196 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" containerName="mount-cgroup" Oct 2 19:30:37.001090 kubelet[2196]: E1002 19:30:37.000975 2196 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" containerName="mount-cgroup" Oct 2 19:30:37.001090 kubelet[2196]: E1002 19:30:37.000994 2196 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" containerName="mount-cgroup" Oct 2 19:30:37.001090 kubelet[2196]: E1002 19:30:37.001011 2196 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" containerName="mount-cgroup" Oct 2 19:30:37.001090 kubelet[2196]: I1002 19:30:37.001046 2196 memory_manager.go:346] "RemoveStaleState removing state" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" containerName="mount-cgroup" Oct 2 19:30:37.001090 kubelet[2196]: I1002 19:30:37.001064 2196 memory_manager.go:346] "RemoveStaleState removing state" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" containerName="mount-cgroup" Oct 2 19:30:37.001090 kubelet[2196]: I1002 19:30:37.001080 2196 memory_manager.go:346] "RemoveStaleState removing state" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" containerName="mount-cgroup" Oct 2 19:30:37.001644 kubelet[2196]: E1002 19:30:37.001111 2196 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" containerName="mount-cgroup" Oct 2 19:30:37.001644 kubelet[2196]: I1002 19:30:37.001140 2196 memory_manager.go:346] "RemoveStaleState removing state" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" containerName="mount-cgroup" Oct 2 19:30:37.001644 kubelet[2196]: I1002 19:30:37.001347 2196 memory_manager.go:346] "RemoveStaleState removing state" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" containerName="mount-cgroup" Oct 2 19:30:37.001644 kubelet[2196]: I1002 19:30:37.001431 2196 memory_manager.go:346] "RemoveStaleState removing state" podUID="2bd04426-239c-4bb0-b854-c99ed0496ddc" containerName="mount-cgroup" Oct 2 19:30:37.006897 kubelet[2196]: I1002 19:30:37.006849 2196 topology_manager.go:215] "Topology Admit Handler" podUID="186acf19-99fa-4b8b-b891-8babc242df77" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-82hlv" Oct 2 19:30:37.012937 systemd[1]: Created slice kubepods-burstable-podac545ceb_d093_4cae_a8b7_5fbb02efaa26.slice. Oct 2 19:30:37.032068 systemd[1]: Created slice kubepods-besteffort-pod186acf19_99fa_4b8b_b891_8babc242df77.slice. 
Oct 2 19:30:37.095951 kubelet[2196]: I1002 19:30:37.095889 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-cilium-cgroup\") pod \"cilium-hbvfj\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " pod="kube-system/cilium-hbvfj" Oct 2 19:30:37.096112 kubelet[2196]: I1002 19:30:37.095987 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-cilium-config-path\") pod \"cilium-hbvfj\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " pod="kube-system/cilium-hbvfj" Oct 2 19:30:37.096112 kubelet[2196]: I1002 19:30:37.096060 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-hubble-tls\") pod \"cilium-hbvfj\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " pod="kube-system/cilium-hbvfj" Oct 2 19:30:37.096275 kubelet[2196]: I1002 19:30:37.096131 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/186acf19-99fa-4b8b-b891-8babc242df77-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-82hlv\" (UID: \"186acf19-99fa-4b8b-b891-8babc242df77\") " pod="kube-system/cilium-operator-6bc8ccdb58-82hlv" Oct 2 19:30:37.096275 kubelet[2196]: I1002 19:30:37.096183 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-hostproc\") pod \"cilium-hbvfj\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " pod="kube-system/cilium-hbvfj" Oct 2 19:30:37.096275 kubelet[2196]: I1002 19:30:37.096254 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-etc-cni-netd\") pod \"cilium-hbvfj\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " pod="kube-system/cilium-hbvfj" Oct 2 19:30:37.096454 kubelet[2196]: I1002 19:30:37.096323 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-cilium-ipsec-secrets\") pod \"cilium-hbvfj\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " pod="kube-system/cilium-hbvfj" Oct 2 19:30:37.096454 kubelet[2196]: I1002 19:30:37.096401 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-host-proc-sys-kernel\") pod \"cilium-hbvfj\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " pod="kube-system/cilium-hbvfj" Oct 2 19:30:37.096667 kubelet[2196]: I1002 19:30:37.096454 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-cilium-run\") pod \"cilium-hbvfj\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " pod="kube-system/cilium-hbvfj" Oct 2 19:30:37.096667 kubelet[2196]: I1002 19:30:37.096525 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-bpf-maps\") pod \"cilium-hbvfj\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " pod="kube-system/cilium-hbvfj" Oct 2 19:30:37.096667 kubelet[2196]: I1002 19:30:37.096616 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-cni-path\") pod \"cilium-hbvfj\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " pod="kube-system/cilium-hbvfj" Oct 2 19:30:37.096952 kubelet[2196]: I1002 19:30:37.096688 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-lib-modules\") pod \"cilium-hbvfj\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " pod="kube-system/cilium-hbvfj" Oct 2 19:30:37.096952 kubelet[2196]: I1002 19:30:37.096778 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-clustermesh-secrets\") pod \"cilium-hbvfj\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " pod="kube-system/cilium-hbvfj" Oct 2 19:30:37.096952 kubelet[2196]: I1002 19:30:37.096847 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grpxb\" (UniqueName: \"kubernetes.io/projected/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-kube-api-access-grpxb\") pod \"cilium-hbvfj\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " pod="kube-system/cilium-hbvfj" Oct 2 19:30:37.096952 kubelet[2196]: I1002 19:30:37.096921 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-xtables-lock\") pod \"cilium-hbvfj\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " pod="kube-system/cilium-hbvfj" Oct 2 19:30:37.097203 kubelet[2196]: I1002 19:30:37.096989 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-host-proc-sys-net\") pod \"cilium-hbvfj\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " pod="kube-system/cilium-hbvfj" Oct 2 19:30:37.097203 kubelet[2196]: I1002 19:30:37.097039 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhzmb\" (UniqueName: \"kubernetes.io/projected/186acf19-99fa-4b8b-b891-8babc242df77-kube-api-access-dhzmb\") pod \"cilium-operator-6bc8ccdb58-82hlv\" (UID: \"186acf19-99fa-4b8b-b891-8babc242df77\") " pod="kube-system/cilium-operator-6bc8ccdb58-82hlv" Oct 2 19:30:37.227011 kubelet[2196]: E1002 19:30:37.226969 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:37.329386 env[1740]: time="2023-10-02T19:30:37.328794967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hbvfj,Uid:ac545ceb-d093-4cae-a8b7-5fbb02efaa26,Namespace:kube-system,Attempt:0,}" Oct 2 19:30:37.338262 env[1740]: time="2023-10-02T19:30:37.338206910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-82hlv,Uid:186acf19-99fa-4b8b-b891-8babc242df77,Namespace:kube-system,Attempt:0,}" Oct 2 
19:30:37.363203 env[1740]: time="2023-10-02T19:30:37.363077117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:30:37.363411 env[1740]: time="2023-10-02T19:30:37.363156127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:30:37.363411 env[1740]: time="2023-10-02T19:30:37.363183307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:30:37.363867 env[1740]: time="2023-10-02T19:30:37.363780502Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653 pid=2969 runtime=io.containerd.runc.v2 Oct 2 19:30:37.390272 env[1740]: time="2023-10-02T19:30:37.390167472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:30:37.390620 env[1740]: time="2023-10-02T19:30:37.390571222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:30:37.390823 env[1740]: time="2023-10-02T19:30:37.390763142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:30:37.392336 env[1740]: time="2023-10-02T19:30:37.392254370Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cbb6e9e3a3a4313e0dbf5b2b06d93cda02a4847c799ea4f1a6d1ba384f3d5ce1 pid=2991 runtime=io.containerd.runc.v2 Oct 2 19:30:37.399925 systemd[1]: Started cri-containerd-47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653.scope. Oct 2 19:30:37.443000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.463711 kernel: audit: type=1400 audit(1696275037.443:723): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.463873 kernel: audit: type=1400 audit(1696275037.443:724): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.443000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.443000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.472455 kernel: audit: type=1400 audit(1696275037.443:725): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.471995 systemd[1]: Started cri-containerd-cbb6e9e3a3a4313e0dbf5b2b06d93cda02a4847c799ea4f1a6d1ba384f3d5ce1.scope. 
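The AVC records here and below repeatedly deny capability=38 and capability=39 for systemd and runc; those numbers correspond to CAP_PERFMON and CAP_BPF, matching the { perfmon } and { bpf } permission names printed in the same records (capabilities above 31 are reported under the capability2 class). A small lookup sketch for reading them:

    # Map the numeric capabilities in the AVC records to names.
    # On current kernels CAP_PERFMON=38 and CAP_BPF=39.
    CAPABILITY_NAMES = {38: "CAP_PERFMON", 39: "CAP_BPF"}

    def capability_name(num: int) -> str:
        return CAPABILITY_NAMES.get(num, f"capability {num}")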
Oct 2 19:30:37.443000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.481010 kernel: audit: type=1400 audit(1696275037.443:726): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.443000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.489173 kernel: audit: type=1400 audit(1696275037.443:727): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.443000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.489730 kernel: audit: type=1400 audit(1696275037.443:728): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.443000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.443000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.506802 kernel: audit: type=1400 audit(1696275037.443:729): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.443000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.525972 kernel: audit: type=1400 audit(1696275037.443:730): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.526147 kernel: audit: type=1400 audit(1696275037.443:731): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.535216 kernel: audit: type=1400 audit(1696275037.443:732): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.443000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.443000 audit: BPF prog-id=87 op=LOAD Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { bpf } for pid=2979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000115b38 a2=10 a3=0 
items=0 ppid=2969 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:37.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437313836653763333733616537333637313234396330316132323062 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { perfmon } for pid=2979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001155a0 a2=3c a3=0 items=0 ppid=2969 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:37.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437313836653763333733616537333637313234396330316132323062 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { bpf } for pid=2979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { bpf } for pid=2979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { bpf } for pid=2979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { perfmon } for pid=2979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { perfmon } for pid=2979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { perfmon } for pid=2979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { perfmon } for pid=2979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { perfmon } for pid=2979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { bpf } for pid=2979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { bpf } for pid=2979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit: BPF prog-id=88 op=LOAD Oct 2 
19:30:37.452000 audit[2979]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001158e0 a2=78 a3=0 items=0 ppid=2969 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:37.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437313836653763333733616537333637313234396330316132323062 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { bpf } for pid=2979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { bpf } for pid=2979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { perfmon } for pid=2979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { perfmon } for pid=2979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { perfmon } for pid=2979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { perfmon } for pid=2979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { perfmon } for pid=2979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { bpf } for pid=2979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { bpf } for pid=2979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit: BPF prog-id=89 op=LOAD Oct 2 19:30:37.452000 audit[2979]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000115670 a2=78 a3=0 items=0 ppid=2969 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:37.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437313836653763333733616537333637313234396330316132323062 Oct 2 19:30:37.452000 audit: BPF prog-id=89 op=UNLOAD Oct 2 19:30:37.452000 audit: BPF prog-id=88 op=UNLOAD Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { bpf } for pid=2979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 
2 19:30:37.452000 audit[2979]: AVC avc: denied { bpf } for pid=2979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { bpf } for pid=2979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { perfmon } for pid=2979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { perfmon } for pid=2979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { perfmon } for pid=2979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { perfmon } for pid=2979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { perfmon } for pid=2979 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { bpf } for pid=2979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit[2979]: AVC avc: denied { bpf } for pid=2979 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.452000 audit: BPF prog-id=90 op=LOAD Oct 2 19:30:37.452000 audit[2979]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000115b40 a2=78 a3=0 items=0 ppid=2969 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:37.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437313836653763333733616537333637313234396330316132323062 Oct 2 19:30:37.511000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.511000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.511000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.511000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.511000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.511000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.511000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.511000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.511000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.515000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.515000 audit: BPF prog-id=91 op=LOAD Oct 2 19:30:37.518000 audit[3007]: AVC avc: denied { bpf } for pid=3007 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.518000 audit[3007]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001bdb38 a2=10 a3=0 items=0 ppid=2991 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:37.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362623665396533613361343331336530646266356232623036643933 Oct 2 19:30:37.539000 audit[3007]: AVC avc: denied { perfmon } for pid=3007 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.539000 audit[3007]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001bd5a0 a2=3c a3=0 items=0 ppid=2991 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:37.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362623665396533613361343331336530646266356232623036643933 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { bpf } for pid=3007 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { bpf } for pid=3007 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { bpf } for pid=3007 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { 
perfmon } for pid=3007 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { perfmon } for pid=3007 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { perfmon } for pid=3007 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { perfmon } for pid=3007 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { perfmon } for pid=3007 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { bpf } for pid=3007 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { bpf } for pid=3007 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit: BPF prog-id=92 op=LOAD Oct 2 19:30:37.540000 audit[3007]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bd8e0 a2=78 a3=0 items=0 ppid=2991 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:37.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362623665396533613361343331336530646266356232623036643933 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { bpf } for pid=3007 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { bpf } for pid=3007 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { perfmon } for pid=3007 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { perfmon } for pid=3007 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { perfmon } for pid=3007 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { perfmon } for pid=3007 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { perfmon } for pid=3007 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { bpf } for pid=3007 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { bpf } for pid=3007 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit: BPF prog-id=93 op=LOAD Oct 2 19:30:37.540000 audit[3007]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001bd670 a2=78 a3=0 items=0 ppid=2991 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:37.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362623665396533613361343331336530646266356232623036643933 Oct 2 19:30:37.540000 audit: BPF prog-id=93 op=UNLOAD Oct 2 19:30:37.540000 audit: BPF prog-id=92 op=UNLOAD Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { bpf } for pid=3007 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { bpf } for pid=3007 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { bpf } for pid=3007 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { perfmon } for pid=3007 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { perfmon } for pid=3007 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { perfmon } for pid=3007 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { perfmon } for pid=3007 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { perfmon } for pid=3007 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { bpf } for pid=3007 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit[3007]: AVC avc: denied { bpf } for pid=3007 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:37.540000 audit: BPF prog-id=94 op=LOAD Oct 2 19:30:37.540000 audit[3007]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bdb40 
a2=78 a3=0 items=0 ppid=2991 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:37.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362623665396533613361343331336530646266356232623036643933 Oct 2 19:30:37.547256 env[1740]: time="2023-10-02T19:30:37.547194537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hbvfj,Uid:ac545ceb-d093-4cae-a8b7-5fbb02efaa26,Namespace:kube-system,Attempt:0,} returns sandbox id \"47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653\"" Oct 2 19:30:37.560524 env[1740]: time="2023-10-02T19:30:37.560459940Z" level=info msg="CreateContainer within sandbox \"47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:30:37.586869 env[1740]: time="2023-10-02T19:30:37.586652530Z" level=info msg="CreateContainer within sandbox \"47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7d438647f662046b8b7e9f289ad828d597752d9e076957587786f0c9e0943f81\"" Oct 2 19:30:37.591246 env[1740]: time="2023-10-02T19:30:37.591179985Z" level=info msg="StartContainer for \"7d438647f662046b8b7e9f289ad828d597752d9e076957587786f0c9e0943f81\"" Oct 2 19:30:37.612379 env[1740]: time="2023-10-02T19:30:37.612314431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-82hlv,Uid:186acf19-99fa-4b8b-b891-8babc242df77,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbb6e9e3a3a4313e0dbf5b2b06d93cda02a4847c799ea4f1a6d1ba384f3d5ce1\"" Oct 2 19:30:37.615872 env[1740]: time="2023-10-02T19:30:37.615805038Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 2 19:30:37.641971 systemd[1]: Started cri-containerd-7d438647f662046b8b7e9f289ad828d597752d9e076957587786f0c9e0943f81.scope. Oct 2 19:30:37.682187 systemd[1]: cri-containerd-7d438647f662046b8b7e9f289ad828d597752d9e076957587786f0c9e0943f81.scope: Deactivated successfully. 
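The proctitle= fields in the audit records above are hex-encoded, NUL-separated command lines. A minimal sketch for decoding one (the audit subsystem truncates the field, so the container ID at the end comes out shortened):

    # Decode an audit PROCTITLE hex blob back into a readable command line.
    def decode_proctitle(hex_blob: str) -> str:
        return bytes.fromhex(hex_blob).replace(b"\x00", b" ").decode(errors="replace")

    # For the first record group above this yields (truncated by the record itself):
    # "runc --root /run/containerd/runc/k8s.io --log
    #  /run/containerd/io.containerd.runtime.v2.task/k8s.io/47186e7c373ae73671249c01a220b"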
Oct 2 19:30:37.713892 env[1740]: time="2023-10-02T19:30:37.713811293Z" level=info msg="shim disconnected" id=7d438647f662046b8b7e9f289ad828d597752d9e076957587786f0c9e0943f81 Oct 2 19:30:37.713892 env[1740]: time="2023-10-02T19:30:37.713887795Z" level=warning msg="cleaning up after shim disconnected" id=7d438647f662046b8b7e9f289ad828d597752d9e076957587786f0c9e0943f81 namespace=k8s.io Oct 2 19:30:37.714256 env[1740]: time="2023-10-02T19:30:37.713910632Z" level=info msg="cleaning up dead shim" Oct 2 19:30:37.739386 env[1740]: time="2023-10-02T19:30:37.739301842Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:30:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3070 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:30:37Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7d438647f662046b8b7e9f289ad828d597752d9e076957587786f0c9e0943f81/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:30:37.739890 env[1740]: time="2023-10-02T19:30:37.739794742Z" level=error msg="copy shim log" error="read /proc/self/fd/36: file already closed" Oct 2 19:30:37.743853 env[1740]: time="2023-10-02T19:30:37.743775957Z" level=error msg="Failed to pipe stdout of container \"7d438647f662046b8b7e9f289ad828d597752d9e076957587786f0c9e0943f81\"" error="reading from a closed fifo" Oct 2 19:30:37.744011 env[1740]: time="2023-10-02T19:30:37.743913972Z" level=error msg="Failed to pipe stderr of container \"7d438647f662046b8b7e9f289ad828d597752d9e076957587786f0c9e0943f81\"" error="reading from a closed fifo" Oct 2 19:30:37.746222 env[1740]: time="2023-10-02T19:30:37.746142281Z" level=error msg="StartContainer for \"7d438647f662046b8b7e9f289ad828d597752d9e076957587786f0c9e0943f81\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:30:37.746592 kubelet[2196]: E1002 19:30:37.746557 2196 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7d438647f662046b8b7e9f289ad828d597752d9e076957587786f0c9e0943f81" Oct 2 19:30:37.746932 kubelet[2196]: E1002 19:30:37.746904 2196 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:30:37.746932 kubelet[2196]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:30:37.746932 kubelet[2196]: rm /hostbin/cilium-mount Oct 2 19:30:37.746932 kubelet[2196]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-grpxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-hbvfj_kube-system(ac545ceb-d093-4cae-a8b7-5fbb02efaa26): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:30:37.747280 kubelet[2196]: E1002 19:30:37.746999 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-hbvfj" podUID="ac545ceb-d093-4cae-a8b7-5fbb02efaa26" Oct 2 19:30:37.808101 env[1740]: time="2023-10-02T19:30:37.808034315Z" level=info msg="CreateContainer within sandbox \"47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:30:37.828015 env[1740]: time="2023-10-02T19:30:37.827952940Z" level=info msg="CreateContainer within sandbox \"47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"e0ee9c4c1d991273bebcfec41d8281bfac4dd544f1a964e6bdf8128e41b5a2df\"" Oct 2 19:30:37.829352 env[1740]: time="2023-10-02T19:30:37.829302912Z" level=info msg="StartContainer for \"e0ee9c4c1d991273bebcfec41d8281bfac4dd544f1a964e6bdf8128e41b5a2df\"" Oct 2 19:30:37.870850 systemd[1]: Started cri-containerd-e0ee9c4c1d991273bebcfec41d8281bfac4dd544f1a964e6bdf8128e41b5a2df.scope. Oct 2 19:30:37.908644 systemd[1]: cri-containerd-e0ee9c4c1d991273bebcfec41d8281bfac4dd544f1a964e6bdf8128e41b5a2df.scope: Deactivated successfully. 
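The first start attempt for the mount-cgroup init container fails when runc writes the container's SELinux label to /proc/self/attr/keycreate during init and the kernel rejects it with "invalid argument" (the retry logged below fails identically). A diagnostic sketch, assuming it is run as root on the affected node; the full context string is our construction from the Type/Level in the container spec above, and this only reproduces the failing write, it is not a fix:

    # Try the same write runc performs during container init. An EINVAL here
    # mirrors the "write /proc/self/attr/keycreate: invalid argument" error above.
    label = "system_u:system_r:spc_t:s0"  # assumed full context from SELinuxOptions{Type:spc_t,Level:s0}
    try:
        with open("/proc/self/attr/keycreate", "w") as f:
            f.write(label)
        print("keycreate accepted:", label)
    except OSError as exc:
        print("keycreate rejected:", exc)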
Oct 2 19:30:37.935820 env[1740]: time="2023-10-02T19:30:37.935750851Z" level=info msg="shim disconnected" id=e0ee9c4c1d991273bebcfec41d8281bfac4dd544f1a964e6bdf8128e41b5a2df Oct 2 19:30:37.936252 env[1740]: time="2023-10-02T19:30:37.936208938Z" level=warning msg="cleaning up after shim disconnected" id=e0ee9c4c1d991273bebcfec41d8281bfac4dd544f1a964e6bdf8128e41b5a2df namespace=k8s.io Oct 2 19:30:37.936438 env[1740]: time="2023-10-02T19:30:37.936408299Z" level=info msg="cleaning up dead shim" Oct 2 19:30:37.961677 env[1740]: time="2023-10-02T19:30:37.961611585Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:30:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3108 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:30:37Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e0ee9c4c1d991273bebcfec41d8281bfac4dd544f1a964e6bdf8128e41b5a2df/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:30:37.962365 env[1740]: time="2023-10-02T19:30:37.962288221Z" level=error msg="copy shim log" error="read /proc/self/fd/36: file already closed" Oct 2 19:30:37.965800 env[1740]: time="2023-10-02T19:30:37.962831042Z" level=error msg="Failed to pipe stdout of container \"e0ee9c4c1d991273bebcfec41d8281bfac4dd544f1a964e6bdf8128e41b5a2df\"" error="reading from a closed fifo" Oct 2 19:30:37.965988 env[1740]: time="2023-10-02T19:30:37.965753016Z" level=error msg="Failed to pipe stderr of container \"e0ee9c4c1d991273bebcfec41d8281bfac4dd544f1a964e6bdf8128e41b5a2df\"" error="reading from a closed fifo" Oct 2 19:30:37.968317 env[1740]: time="2023-10-02T19:30:37.968232058Z" level=error msg="StartContainer for \"e0ee9c4c1d991273bebcfec41d8281bfac4dd544f1a964e6bdf8128e41b5a2df\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:30:37.968625 kubelet[2196]: E1002 19:30:37.968560 2196 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e0ee9c4c1d991273bebcfec41d8281bfac4dd544f1a964e6bdf8128e41b5a2df" Oct 2 19:30:37.968908 kubelet[2196]: E1002 19:30:37.968754 2196 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:30:37.968908 kubelet[2196]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:30:37.968908 kubelet[2196]: rm /hostbin/cilium-mount Oct 2 19:30:37.968908 kubelet[2196]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-grpxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-hbvfj_kube-system(ac545ceb-d093-4cae-a8b7-5fbb02efaa26): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:30:37.968908 kubelet[2196]: E1002 19:30:37.968828 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-hbvfj" podUID="ac545ceb-d093-4cae-a8b7-5fbb02efaa26" Oct 2 19:30:38.228316 kubelet[2196]: E1002 19:30:38.228262 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:38.810677 kubelet[2196]: I1002 19:30:38.810630 2196 scope.go:117] "RemoveContainer" containerID="7d438647f662046b8b7e9f289ad828d597752d9e076957587786f0c9e0943f81" Oct 2 19:30:38.811363 kubelet[2196]: I1002 19:30:38.811320 2196 scope.go:117] "RemoveContainer" containerID="7d438647f662046b8b7e9f289ad828d597752d9e076957587786f0c9e0943f81" Oct 2 19:30:38.821840 env[1740]: time="2023-10-02T19:30:38.821787014Z" level=info msg="RemoveContainer for \"7d438647f662046b8b7e9f289ad828d597752d9e076957587786f0c9e0943f81\"" Oct 2 19:30:38.825307 env[1740]: time="2023-10-02T19:30:38.825235153Z" level=info msg="RemoveContainer for \"7d438647f662046b8b7e9f289ad828d597752d9e076957587786f0c9e0943f81\"" Oct 2 19:30:38.825486 env[1740]: time="2023-10-02T19:30:38.825375028Z" level=error msg="RemoveContainer for \"7d438647f662046b8b7e9f289ad828d597752d9e076957587786f0c9e0943f81\" failed" error="rpc error: code = NotFound desc = get container info: container \"7d438647f662046b8b7e9f289ad828d597752d9e076957587786f0c9e0943f81\" in namespace \"k8s.io\": not found" Oct 2 19:30:38.825755 kubelet[2196]: E1002 19:30:38.825669 2196 remote_runtime.go:385] 
"RemoveContainer from runtime service failed" err="rpc error: code = NotFound desc = get container info: container \"7d438647f662046b8b7e9f289ad828d597752d9e076957587786f0c9e0943f81\" in namespace \"k8s.io\": not found" containerID="7d438647f662046b8b7e9f289ad828d597752d9e076957587786f0c9e0943f81" Oct 2 19:30:38.825871 kubelet[2196]: E1002 19:30:38.825810 2196 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = NotFound desc = get container info: container "7d438647f662046b8b7e9f289ad828d597752d9e076957587786f0c9e0943f81" in namespace "k8s.io": not found; Skipping pod "cilium-hbvfj_kube-system(ac545ceb-d093-4cae-a8b7-5fbb02efaa26)" Oct 2 19:30:38.826352 kubelet[2196]: E1002 19:30:38.826308 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-hbvfj_kube-system(ac545ceb-d093-4cae-a8b7-5fbb02efaa26)\"" pod="kube-system/cilium-hbvfj" podUID="ac545ceb-d093-4cae-a8b7-5fbb02efaa26" Oct 2 19:30:38.827129 env[1740]: time="2023-10-02T19:30:38.827078252Z" level=info msg="RemoveContainer for \"7d438647f662046b8b7e9f289ad828d597752d9e076957587786f0c9e0943f81\" returns successfully" Oct 2 19:30:38.906159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3131334756.mount: Deactivated successfully. Oct 2 19:30:39.229451 kubelet[2196]: E1002 19:30:39.229379 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:39.277370 kubelet[2196]: E1002 19:30:39.277324 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:40.022471 env[1740]: time="2023-10-02T19:30:40.022403063Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:30:40.025254 env[1740]: time="2023-10-02T19:30:40.025192961Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:30:40.027924 env[1740]: time="2023-10-02T19:30:40.027874762Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:30:40.028970 env[1740]: time="2023-10-02T19:30:40.028923527Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 2 19:30:40.033309 env[1740]: time="2023-10-02T19:30:40.033256686Z" level=info msg="CreateContainer within sandbox \"cbb6e9e3a3a4313e0dbf5b2b06d93cda02a4847c799ea4f1a6d1ba384f3d5ce1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:30:40.051996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2112611992.mount: Deactivated successfully. 
Oct 2 19:30:40.065101 env[1740]: time="2023-10-02T19:30:40.065020535Z" level=info msg="CreateContainer within sandbox \"cbb6e9e3a3a4313e0dbf5b2b06d93cda02a4847c799ea4f1a6d1ba384f3d5ce1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"16ef950d4c50d8d15c85f2fdb2f39dbd955fda3c3150168f57ad4688c80fe674\"" Oct 2 19:30:40.066672 env[1740]: time="2023-10-02T19:30:40.066627181Z" level=info msg="StartContainer for \"16ef950d4c50d8d15c85f2fdb2f39dbd955fda3c3150168f57ad4688c80fe674\"" Oct 2 19:30:40.121745 systemd[1]: Started cri-containerd-16ef950d4c50d8d15c85f2fdb2f39dbd955fda3c3150168f57ad4688c80fe674.scope. Oct 2 19:30:40.162000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.162000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.162000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.162000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.162000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.162000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.162000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.162000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.162000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.162000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.163000 audit: BPF prog-id=95 op=LOAD Oct 2 19:30:40.164000 audit[3128]: AVC avc: denied { bpf } for pid=3128 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.164000 audit[3128]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=2991 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:40.164000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136656639353064346335306438643135633835663266646232663339 Oct 2 19:30:40.165000 audit[3128]: AVC avc: denied { perfmon } for pid=3128 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.165000 audit[3128]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2991 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:40.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136656639353064346335306438643135633835663266646232663339 Oct 2 19:30:40.165000 audit[3128]: AVC avc: denied { bpf } for pid=3128 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.165000 audit[3128]: AVC avc: denied { bpf } for pid=3128 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.165000 audit[3128]: AVC avc: denied { bpf } for pid=3128 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.165000 audit[3128]: AVC avc: denied { perfmon } for pid=3128 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.165000 audit[3128]: AVC avc: denied { perfmon } for pid=3128 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.165000 audit[3128]: AVC avc: denied { perfmon } for pid=3128 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.165000 audit[3128]: AVC avc: denied { perfmon } for pid=3128 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.165000 audit[3128]: AVC avc: denied { perfmon } for pid=3128 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.165000 audit[3128]: AVC avc: denied { bpf } for pid=3128 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.165000 audit[3128]: AVC avc: denied { bpf } for pid=3128 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.165000 audit: BPF prog-id=96 op=LOAD Oct 2 19:30:40.165000 audit[3128]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2991 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:40.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136656639353064346335306438643135633835663266646232663339 Oct 2 19:30:40.167000 audit[3128]: AVC avc: denied { bpf } for pid=3128 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.167000 audit[3128]: AVC avc: denied { bpf } for pid=3128 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.167000 audit[3128]: AVC avc: denied { perfmon } for pid=3128 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.167000 audit[3128]: AVC avc: denied { perfmon } for pid=3128 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.167000 audit[3128]: AVC avc: denied { perfmon } for pid=3128 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.167000 audit[3128]: AVC avc: denied { perfmon } for pid=3128 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.167000 audit[3128]: AVC avc: denied { perfmon } for pid=3128 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.167000 audit[3128]: AVC avc: denied { bpf } for pid=3128 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.167000 audit[3128]: AVC avc: denied { bpf } for pid=3128 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.167000 audit: BPF prog-id=97 op=LOAD Oct 2 19:30:40.167000 audit[3128]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=2991 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:40.167000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136656639353064346335306438643135633835663266646232663339 Oct 2 19:30:40.169000 audit: BPF prog-id=97 op=UNLOAD Oct 2 19:30:40.169000 audit: BPF prog-id=96 op=UNLOAD Oct 2 19:30:40.169000 audit[3128]: AVC avc: denied { bpf } for pid=3128 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.169000 audit[3128]: AVC avc: denied { bpf } for pid=3128 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.169000 audit[3128]: AVC avc: 
denied { bpf } for pid=3128 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.169000 audit[3128]: AVC avc: denied { perfmon } for pid=3128 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.169000 audit[3128]: AVC avc: denied { perfmon } for pid=3128 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.169000 audit[3128]: AVC avc: denied { perfmon } for pid=3128 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.169000 audit[3128]: AVC avc: denied { perfmon } for pid=3128 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.169000 audit[3128]: AVC avc: denied { perfmon } for pid=3128 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.169000 audit[3128]: AVC avc: denied { bpf } for pid=3128 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.169000 audit[3128]: AVC avc: denied { bpf } for pid=3128 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:40.169000 audit: BPF prog-id=98 op=LOAD Oct 2 19:30:40.169000 audit[3128]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2991 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:40.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136656639353064346335306438643135633835663266646232663339 Oct 2 19:30:40.201169 env[1740]: time="2023-10-02T19:30:40.201088529Z" level=info msg="StartContainer for \"16ef950d4c50d8d15c85f2fdb2f39dbd955fda3c3150168f57ad4688c80fe674\" returns successfully" Oct 2 19:30:40.230201 kubelet[2196]: E1002 19:30:40.230128 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:40.274000 audit[3139]: AVC avc: denied { map_create } for pid=3139 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c737,c927 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c737,c927 tclass=bpf permissive=0 Oct 2 19:30:40.274000 audit[3139]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-13 a0=0 a1=400062f768 a2=48 a3=0 items=0 ppid=2991 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c737,c927 key=(null) Oct 2 19:30:40.274000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:30:40.820428 
kubelet[2196]: W1002 19:30:40.820381 2196 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac545ceb_d093_4cae_a8b7_5fbb02efaa26.slice/cri-containerd-7d438647f662046b8b7e9f289ad828d597752d9e076957587786f0c9e0943f81.scope WatchSource:0}: container "7d438647f662046b8b7e9f289ad828d597752d9e076957587786f0c9e0943f81" in namespace "k8s.io": not found Oct 2 19:30:40.833058 kubelet[2196]: I1002 19:30:40.833022 2196 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-82hlv" podStartSLOduration=2.418182554 podCreationTimestamp="2023-10-02 19:30:36 +0000 UTC" firstStartedPulling="2023-10-02 19:30:37.614618474 +0000 UTC m=+205.245567496" lastFinishedPulling="2023-10-02 19:30:40.029380402 +0000 UTC m=+207.660329424" observedRunningTime="2023-10-02 19:30:40.83242799 +0000 UTC m=+208.463377036" watchObservedRunningTime="2023-10-02 19:30:40.832944482 +0000 UTC m=+208.463893504" Oct 2 19:30:41.048198 systemd[1]: run-containerd-runc-k8s.io-16ef950d4c50d8d15c85f2fdb2f39dbd955fda3c3150168f57ad4688c80fe674-runc.8USAEw.mount: Deactivated successfully. Oct 2 19:30:41.230673 kubelet[2196]: E1002 19:30:41.230627 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:42.232456 kubelet[2196]: E1002 19:30:42.232376 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:43.234069 kubelet[2196]: E1002 19:30:43.233994 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:43.929208 kubelet[2196]: W1002 19:30:43.929142 2196 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac545ceb_d093_4cae_a8b7_5fbb02efaa26.slice/cri-containerd-e0ee9c4c1d991273bebcfec41d8281bfac4dd544f1a964e6bdf8128e41b5a2df.scope WatchSource:0}: task e0ee9c4c1d991273bebcfec41d8281bfac4dd544f1a964e6bdf8128e41b5a2df not found: not found Oct 2 19:30:44.234358 kubelet[2196]: E1002 19:30:44.234314 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:44.278020 kubelet[2196]: E1002 19:30:44.277952 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:45.235365 kubelet[2196]: E1002 19:30:45.235264 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:46.235544 kubelet[2196]: E1002 19:30:46.235471 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:47.236262 kubelet[2196]: E1002 19:30:47.236184 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:48.236585 kubelet[2196]: E1002 19:30:48.236520 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:49.237384 kubelet[2196]: E1002 19:30:49.237316 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:49.279471 kubelet[2196]: E1002 19:30:49.279441 2196 
kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:49.314522 env[1740]: time="2023-10-02T19:30:49.314448825Z" level=info msg="CreateContainer within sandbox \"47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:30:49.334547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3847018375.mount: Deactivated successfully. Oct 2 19:30:49.346134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1353441074.mount: Deactivated successfully. Oct 2 19:30:49.356161 env[1740]: time="2023-10-02T19:30:49.356066285Z" level=info msg="CreateContainer within sandbox \"47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad\"" Oct 2 19:30:49.357071 env[1740]: time="2023-10-02T19:30:49.357023067Z" level=info msg="StartContainer for \"4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad\"" Oct 2 19:30:49.405638 systemd[1]: Started cri-containerd-4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad.scope. Oct 2 19:30:49.443926 systemd[1]: cri-containerd-4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad.scope: Deactivated successfully. Oct 2 19:30:49.655571 env[1740]: time="2023-10-02T19:30:49.655390741Z" level=info msg="shim disconnected" id=4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad Oct 2 19:30:49.655571 env[1740]: time="2023-10-02T19:30:49.655463965Z" level=warning msg="cleaning up after shim disconnected" id=4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad namespace=k8s.io Oct 2 19:30:49.655571 env[1740]: time="2023-10-02T19:30:49.655486921Z" level=info msg="cleaning up dead shim" Oct 2 19:30:49.682450 env[1740]: time="2023-10-02T19:30:49.682380403Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:30:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3184 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:30:49Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:30:49.683093 env[1740]: time="2023-10-02T19:30:49.682843591Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:30:49.683392 env[1740]: time="2023-10-02T19:30:49.683336370Z" level=error msg="Failed to pipe stdout of container \"4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad\"" error="reading from a closed fifo" Oct 2 19:30:49.683605 env[1740]: time="2023-10-02T19:30:49.683526414Z" level=error msg="Failed to pipe stderr of container \"4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad\"" error="reading from a closed fifo" Oct 2 19:30:49.685841 env[1740]: time="2023-10-02T19:30:49.685772642Z" level=error msg="StartContainer for \"4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:30:49.686144 kubelet[2196]: 
E1002 19:30:49.686090 2196 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad" Oct 2 19:30:49.686382 kubelet[2196]: E1002 19:30:49.686245 2196 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:30:49.686382 kubelet[2196]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:30:49.686382 kubelet[2196]: rm /hostbin/cilium-mount Oct 2 19:30:49.686382 kubelet[2196]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-grpxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-hbvfj_kube-system(ac545ceb-d093-4cae-a8b7-5fbb02efaa26): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:30:49.686382 kubelet[2196]: E1002 19:30:49.686310 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-hbvfj" podUID="ac545ceb-d093-4cae-a8b7-5fbb02efaa26" Oct 2 19:30:49.841087 kubelet[2196]: I1002 19:30:49.841051 2196 scope.go:117] "RemoveContainer" containerID="e0ee9c4c1d991273bebcfec41d8281bfac4dd544f1a964e6bdf8128e41b5a2df" Oct 2 19:30:49.842084 kubelet[2196]: I1002 19:30:49.842039 2196 scope.go:117] "RemoveContainer" 
containerID="e0ee9c4c1d991273bebcfec41d8281bfac4dd544f1a964e6bdf8128e41b5a2df" Oct 2 19:30:49.844902 env[1740]: time="2023-10-02T19:30:49.844845010Z" level=info msg="RemoveContainer for \"e0ee9c4c1d991273bebcfec41d8281bfac4dd544f1a964e6bdf8128e41b5a2df\"" Oct 2 19:30:49.847571 env[1740]: time="2023-10-02T19:30:49.847520766Z" level=info msg="RemoveContainer for \"e0ee9c4c1d991273bebcfec41d8281bfac4dd544f1a964e6bdf8128e41b5a2df\"" Oct 2 19:30:49.848001 env[1740]: time="2023-10-02T19:30:49.847947209Z" level=error msg="RemoveContainer for \"e0ee9c4c1d991273bebcfec41d8281bfac4dd544f1a964e6bdf8128e41b5a2df\" failed" error="failed to set removing state for container \"e0ee9c4c1d991273bebcfec41d8281bfac4dd544f1a964e6bdf8128e41b5a2df\": container is already in removing state" Oct 2 19:30:49.848444 kubelet[2196]: E1002 19:30:49.848396 2196 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"e0ee9c4c1d991273bebcfec41d8281bfac4dd544f1a964e6bdf8128e41b5a2df\": container is already in removing state" containerID="e0ee9c4c1d991273bebcfec41d8281bfac4dd544f1a964e6bdf8128e41b5a2df" Oct 2 19:30:49.848614 kubelet[2196]: E1002 19:30:49.848462 2196 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "e0ee9c4c1d991273bebcfec41d8281bfac4dd544f1a964e6bdf8128e41b5a2df": container is already in removing state; Skipping pod "cilium-hbvfj_kube-system(ac545ceb-d093-4cae-a8b7-5fbb02efaa26)" Oct 2 19:30:49.849059 kubelet[2196]: E1002 19:30:49.849019 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-hbvfj_kube-system(ac545ceb-d093-4cae-a8b7-5fbb02efaa26)\"" pod="kube-system/cilium-hbvfj" podUID="ac545ceb-d093-4cae-a8b7-5fbb02efaa26" Oct 2 19:30:49.851094 env[1740]: time="2023-10-02T19:30:49.851033556Z" level=info msg="RemoveContainer for \"e0ee9c4c1d991273bebcfec41d8281bfac4dd544f1a964e6bdf8128e41b5a2df\" returns successfully" Oct 2 19:30:50.237868 kubelet[2196]: E1002 19:30:50.237822 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:50.329851 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad-rootfs.mount: Deactivated successfully. 
Oct 2 19:30:51.239091 kubelet[2196]: E1002 19:30:51.239023 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:52.239608 kubelet[2196]: E1002 19:30:52.239559 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:52.762160 kubelet[2196]: W1002 19:30:52.762112 2196 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac545ceb_d093_4cae_a8b7_5fbb02efaa26.slice/cri-containerd-4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad.scope WatchSource:0}: task 4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad not found: not found Oct 2 19:30:53.240726 kubelet[2196]: E1002 19:30:53.240655 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:54.021013 kubelet[2196]: E1002 19:30:54.020949 2196 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:54.241268 kubelet[2196]: E1002 19:30:54.241220 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:54.280997 kubelet[2196]: E1002 19:30:54.280850 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:55.242872 kubelet[2196]: E1002 19:30:55.242800 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:56.243030 kubelet[2196]: E1002 19:30:56.242959 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:57.244145 kubelet[2196]: E1002 19:30:57.244092 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:58.245229 kubelet[2196]: E1002 19:30:58.245184 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:59.246143 kubelet[2196]: E1002 19:30:59.246095 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:59.281997 kubelet[2196]: E1002 19:30:59.281947 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:00.247302 kubelet[2196]: E1002 19:31:00.247255 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:01.248433 kubelet[2196]: E1002 19:31:01.248387 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:02.249508 kubelet[2196]: E1002 19:31:02.249467 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:03.251288 kubelet[2196]: E1002 19:31:03.251218 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:03.310710 kubelet[2196]: E1002 19:31:03.310645 2196 pod_workers.go:1300] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-hbvfj_kube-system(ac545ceb-d093-4cae-a8b7-5fbb02efaa26)\"" pod="kube-system/cilium-hbvfj" podUID="ac545ceb-d093-4cae-a8b7-5fbb02efaa26" Oct 2 19:31:04.252356 kubelet[2196]: E1002 19:31:04.252284 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:04.282808 kubelet[2196]: E1002 19:31:04.282768 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:05.253414 kubelet[2196]: E1002 19:31:05.253344 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:06.254565 kubelet[2196]: E1002 19:31:06.254494 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:07.254941 kubelet[2196]: E1002 19:31:07.254874 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:08.255087 kubelet[2196]: E1002 19:31:08.255003 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:09.255930 kubelet[2196]: E1002 19:31:09.255862 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:09.284271 kubelet[2196]: E1002 19:31:09.284222 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:10.257060 kubelet[2196]: E1002 19:31:10.256988 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:11.258197 kubelet[2196]: E1002 19:31:11.258150 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:12.259378 kubelet[2196]: E1002 19:31:12.259332 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:13.260804 kubelet[2196]: E1002 19:31:13.260747 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:14.021450 kubelet[2196]: E1002 19:31:14.021391 2196 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:14.074361 env[1740]: time="2023-10-02T19:31:14.074058912Z" level=info msg="StopPodSandbox for \"a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb\"" Oct 2 19:31:14.074361 env[1740]: time="2023-10-02T19:31:14.074194489Z" level=info msg="TearDown network for sandbox \"a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb\" successfully" Oct 2 19:31:14.074361 env[1740]: time="2023-10-02T19:31:14.074249449Z" level=info msg="StopPodSandbox for \"a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb\" returns successfully" Oct 2 19:31:14.075799 env[1740]: time="2023-10-02T19:31:14.075351944Z" level=info msg="RemovePodSandbox for \"a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb\"" Oct 2 19:31:14.075799 env[1740]: 
time="2023-10-02T19:31:14.075406652Z" level=info msg="Forcibly stopping sandbox \"a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb\"" Oct 2 19:31:14.075799 env[1740]: time="2023-10-02T19:31:14.075526821Z" level=info msg="TearDown network for sandbox \"a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb\" successfully" Oct 2 19:31:14.080612 env[1740]: time="2023-10-02T19:31:14.080546981Z" level=info msg="RemovePodSandbox \"a036875f2e46aa00c47ff678e26583be42eb4bcf7d15f5be8bec923b2d672acb\" returns successfully" Oct 2 19:31:14.262538 kubelet[2196]: E1002 19:31:14.262475 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:14.286249 kubelet[2196]: E1002 19:31:14.285568 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:15.263612 kubelet[2196]: E1002 19:31:15.263545 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:16.264347 kubelet[2196]: E1002 19:31:16.264289 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:17.265544 kubelet[2196]: E1002 19:31:17.265493 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:18.266566 kubelet[2196]: E1002 19:31:18.266492 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:18.314378 env[1740]: time="2023-10-02T19:31:18.314326659Z" level=info msg="CreateContainer within sandbox \"47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:31:18.332818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1351960096.mount: Deactivated successfully. Oct 2 19:31:18.343110 env[1740]: time="2023-10-02T19:31:18.343029117Z" level=info msg="CreateContainer within sandbox \"47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"47c0c84d3ba7a281ac2cd81fbb791545a3c2974e5b287638e2e53284cd839b5f\"" Oct 2 19:31:18.344124 env[1740]: time="2023-10-02T19:31:18.344070365Z" level=info msg="StartContainer for \"47c0c84d3ba7a281ac2cd81fbb791545a3c2974e5b287638e2e53284cd839b5f\"" Oct 2 19:31:18.397364 systemd[1]: Started cri-containerd-47c0c84d3ba7a281ac2cd81fbb791545a3c2974e5b287638e2e53284cd839b5f.scope. Oct 2 19:31:18.435983 systemd[1]: cri-containerd-47c0c84d3ba7a281ac2cd81fbb791545a3c2974e5b287638e2e53284cd839b5f.scope: Deactivated successfully. 
Oct 2 19:31:18.458172 env[1740]: time="2023-10-02T19:31:18.458097324Z" level=info msg="shim disconnected" id=47c0c84d3ba7a281ac2cd81fbb791545a3c2974e5b287638e2e53284cd839b5f Oct 2 19:31:18.458449 env[1740]: time="2023-10-02T19:31:18.458175096Z" level=warning msg="cleaning up after shim disconnected" id=47c0c84d3ba7a281ac2cd81fbb791545a3c2974e5b287638e2e53284cd839b5f namespace=k8s.io Oct 2 19:31:18.458449 env[1740]: time="2023-10-02T19:31:18.458197776Z" level=info msg="cleaning up dead shim" Oct 2 19:31:18.485579 env[1740]: time="2023-10-02T19:31:18.485346415Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:31:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3224 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:31:18Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/47c0c84d3ba7a281ac2cd81fbb791545a3c2974e5b287638e2e53284cd839b5f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:31:18.486103 env[1740]: time="2023-10-02T19:31:18.485999112Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:31:18.486918 env[1740]: time="2023-10-02T19:31:18.486847134Z" level=error msg="Failed to pipe stdout of container \"47c0c84d3ba7a281ac2cd81fbb791545a3c2974e5b287638e2e53284cd839b5f\"" error="reading from a closed fifo" Oct 2 19:31:18.488819 env[1740]: time="2023-10-02T19:31:18.488759156Z" level=error msg="Failed to pipe stderr of container \"47c0c84d3ba7a281ac2cd81fbb791545a3c2974e5b287638e2e53284cd839b5f\"" error="reading from a closed fifo" Oct 2 19:31:18.491118 env[1740]: time="2023-10-02T19:31:18.491032381Z" level=error msg="StartContainer for \"47c0c84d3ba7a281ac2cd81fbb791545a3c2974e5b287638e2e53284cd839b5f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:31:18.491423 kubelet[2196]: E1002 19:31:18.491363 2196 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="47c0c84d3ba7a281ac2cd81fbb791545a3c2974e5b287638e2e53284cd839b5f" Oct 2 19:31:18.491570 kubelet[2196]: E1002 19:31:18.491524 2196 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:31:18.491570 kubelet[2196]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:31:18.491570 kubelet[2196]: rm /hostbin/cilium-mount Oct 2 19:31:18.491570 kubelet[2196]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-grpxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-hbvfj_kube-system(ac545ceb-d093-4cae-a8b7-5fbb02efaa26): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:31:18.492008 kubelet[2196]: E1002 19:31:18.491591 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-hbvfj" podUID="ac545ceb-d093-4cae-a8b7-5fbb02efaa26" Oct 2 19:31:18.908560 kubelet[2196]: I1002 19:31:18.908514 2196 scope.go:117] "RemoveContainer" containerID="4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad" Oct 2 19:31:18.910590 kubelet[2196]: I1002 19:31:18.910542 2196 scope.go:117] "RemoveContainer" containerID="4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad" Oct 2 19:31:18.911812 env[1740]: time="2023-10-02T19:31:18.911735276Z" level=info msg="RemoveContainer for \"4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad\"" Oct 2 19:31:18.914772 env[1740]: time="2023-10-02T19:31:18.914689985Z" level=info msg="RemoveContainer for \"4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad\"" Oct 2 19:31:18.914945 env[1740]: time="2023-10-02T19:31:18.914841882Z" level=error msg="RemoveContainer for \"4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad\" failed" error="rpc error: code = NotFound desc = get container info: container \"4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad\" in namespace \"k8s.io\": not found" Oct 2 19:31:18.915249 kubelet[2196]: E1002 19:31:18.915219 2196 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = NotFound desc = get container info: container \"4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad\" in 
namespace \"k8s.io\": not found" containerID="4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad" Oct 2 19:31:18.915461 kubelet[2196]: E1002 19:31:18.915425 2196 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = NotFound desc = get container info: container "4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad" in namespace "k8s.io": not found; Skipping pod "cilium-hbvfj_kube-system(ac545ceb-d093-4cae-a8b7-5fbb02efaa26)" Oct 2 19:31:18.916266 kubelet[2196]: E1002 19:31:18.916237 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-hbvfj_kube-system(ac545ceb-d093-4cae-a8b7-5fbb02efaa26)\"" pod="kube-system/cilium-hbvfj" podUID="ac545ceb-d093-4cae-a8b7-5fbb02efaa26" Oct 2 19:31:18.916869 env[1740]: time="2023-10-02T19:31:18.916792305Z" level=info msg="RemoveContainer for \"4891a2c33e4c65ad80956c21eeda8963eb2fa32d1ab25b7d54eede5815e31bad\" returns successfully" Oct 2 19:31:19.267186 kubelet[2196]: E1002 19:31:19.267109 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:19.287230 kubelet[2196]: E1002 19:31:19.287179 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:19.326437 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47c0c84d3ba7a281ac2cd81fbb791545a3c2974e5b287638e2e53284cd839b5f-rootfs.mount: Deactivated successfully. Oct 2 19:31:20.268852 kubelet[2196]: E1002 19:31:20.268802 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:21.270646 kubelet[2196]: E1002 19:31:21.270599 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:21.564612 kubelet[2196]: W1002 19:31:21.564475 2196 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac545ceb_d093_4cae_a8b7_5fbb02efaa26.slice/cri-containerd-47c0c84d3ba7a281ac2cd81fbb791545a3c2974e5b287638e2e53284cd839b5f.scope WatchSource:0}: task 47c0c84d3ba7a281ac2cd81fbb791545a3c2974e5b287638e2e53284cd839b5f not found: not found Oct 2 19:31:22.271820 kubelet[2196]: E1002 19:31:22.271776 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:23.272936 kubelet[2196]: E1002 19:31:23.272869 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:24.273138 kubelet[2196]: E1002 19:31:24.273004 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:24.288038 kubelet[2196]: E1002 19:31:24.287991 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:25.273164 kubelet[2196]: E1002 19:31:25.273121 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:26.274531 kubelet[2196]: E1002 19:31:26.274432 2196 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:27.275061 kubelet[2196]: E1002 19:31:27.274990 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:28.276230 kubelet[2196]: E1002 19:31:28.276162 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:29.277192 kubelet[2196]: E1002 19:31:29.277146 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:29.289121 kubelet[2196]: E1002 19:31:29.289092 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:30.278591 kubelet[2196]: E1002 19:31:30.278519 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:31.279500 kubelet[2196]: E1002 19:31:31.279427 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:32.279729 kubelet[2196]: E1002 19:31:32.279655 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:33.280652 kubelet[2196]: E1002 19:31:33.280610 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:33.311731 kubelet[2196]: E1002 19:31:33.311656 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-hbvfj_kube-system(ac545ceb-d093-4cae-a8b7-5fbb02efaa26)\"" pod="kube-system/cilium-hbvfj" podUID="ac545ceb-d093-4cae-a8b7-5fbb02efaa26" Oct 2 19:31:34.020829 kubelet[2196]: E1002 19:31:34.020790 2196 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:34.282328 kubelet[2196]: E1002 19:31:34.282189 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:34.290474 kubelet[2196]: E1002 19:31:34.290442 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:35.283364 kubelet[2196]: E1002 19:31:35.283292 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:36.283991 kubelet[2196]: E1002 19:31:36.283892 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:37.284868 kubelet[2196]: E1002 19:31:37.284820 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:38.286143 kubelet[2196]: E1002 19:31:38.286079 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:38.697678 env[1740]: time="2023-10-02T19:31:38.694653405Z" level=info msg="StopPodSandbox for \"47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653\"" Oct 2 19:31:38.697678 env[1740]: 
time="2023-10-02T19:31:38.694873499Z" level=info msg="Container to stop \"47c0c84d3ba7a281ac2cd81fbb791545a3c2974e5b287638e2e53284cd839b5f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:31:38.697372 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653-shm.mount: Deactivated successfully. Oct 2 19:31:38.716577 systemd[1]: cri-containerd-47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653.scope: Deactivated successfully. Oct 2 19:31:38.720070 kernel: kauditd_printk_skb: 164 callbacks suppressed Oct 2 19:31:38.720293 kernel: audit: type=1334 audit(1696275098.715:778): prog-id=87 op=UNLOAD Oct 2 19:31:38.715000 audit: BPF prog-id=87 op=UNLOAD Oct 2 19:31:38.723000 audit: BPF prog-id=90 op=UNLOAD Oct 2 19:31:38.728763 kernel: audit: type=1334 audit(1696275098.723:779): prog-id=90 op=UNLOAD Oct 2 19:31:38.764574 env[1740]: time="2023-10-02T19:31:38.764520833Z" level=info msg="StopContainer for \"16ef950d4c50d8d15c85f2fdb2f39dbd955fda3c3150168f57ad4688c80fe674\" with timeout 30 (s)" Oct 2 19:31:38.765523 env[1740]: time="2023-10-02T19:31:38.765479920Z" level=info msg="Stop container \"16ef950d4c50d8d15c85f2fdb2f39dbd955fda3c3150168f57ad4688c80fe674\" with signal terminated" Oct 2 19:31:38.769770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653-rootfs.mount: Deactivated successfully. Oct 2 19:31:38.788321 env[1740]: time="2023-10-02T19:31:38.788257153Z" level=info msg="shim disconnected" id=47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653 Oct 2 19:31:38.789744 env[1740]: time="2023-10-02T19:31:38.789662741Z" level=warning msg="cleaning up after shim disconnected" id=47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653 namespace=k8s.io Oct 2 19:31:38.789982 env[1740]: time="2023-10-02T19:31:38.789951044Z" level=info msg="cleaning up dead shim" Oct 2 19:31:38.798000 audit: BPF prog-id=95 op=UNLOAD Oct 2 19:31:38.799805 systemd[1]: cri-containerd-16ef950d4c50d8d15c85f2fdb2f39dbd955fda3c3150168f57ad4688c80fe674.scope: Deactivated successfully. Oct 2 19:31:38.803769 kernel: audit: type=1334 audit(1696275098.798:780): prog-id=95 op=UNLOAD Oct 2 19:31:38.804000 audit: BPF prog-id=98 op=UNLOAD Oct 2 19:31:38.808756 kernel: audit: type=1334 audit(1696275098.804:781): prog-id=98 op=UNLOAD Oct 2 19:31:38.833302 env[1740]: time="2023-10-02T19:31:38.833248056Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:31:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3263 runtime=io.containerd.runc.v2\n" Oct 2 19:31:38.834191 env[1740]: time="2023-10-02T19:31:38.834130978Z" level=info msg="TearDown network for sandbox \"47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653\" successfully" Oct 2 19:31:38.834623 env[1740]: time="2023-10-02T19:31:38.834559671Z" level=info msg="StopPodSandbox for \"47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653\" returns successfully" Oct 2 19:31:38.857640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16ef950d4c50d8d15c85f2fdb2f39dbd955fda3c3150168f57ad4688c80fe674-rootfs.mount: Deactivated successfully. 
Oct 2 19:31:38.872224 env[1740]: time="2023-10-02T19:31:38.872159881Z" level=info msg="shim disconnected" id=16ef950d4c50d8d15c85f2fdb2f39dbd955fda3c3150168f57ad4688c80fe674 Oct 2 19:31:38.872651 env[1740]: time="2023-10-02T19:31:38.872617087Z" level=warning msg="cleaning up after shim disconnected" id=16ef950d4c50d8d15c85f2fdb2f39dbd955fda3c3150168f57ad4688c80fe674 namespace=k8s.io Oct 2 19:31:38.872822 env[1740]: time="2023-10-02T19:31:38.872792157Z" level=info msg="cleaning up dead shim" Oct 2 19:31:38.899002 env[1740]: time="2023-10-02T19:31:38.898939040Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:31:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3291 runtime=io.containerd.runc.v2\n" Oct 2 19:31:38.901572 env[1740]: time="2023-10-02T19:31:38.901497949Z" level=info msg="StopContainer for \"16ef950d4c50d8d15c85f2fdb2f39dbd955fda3c3150168f57ad4688c80fe674\" returns successfully" Oct 2 19:31:38.902382 env[1740]: time="2023-10-02T19:31:38.902316143Z" level=info msg="StopPodSandbox for \"cbb6e9e3a3a4313e0dbf5b2b06d93cda02a4847c799ea4f1a6d1ba384f3d5ce1\"" Oct 2 19:31:38.902620 env[1740]: time="2023-10-02T19:31:38.902404860Z" level=info msg="Container to stop \"16ef950d4c50d8d15c85f2fdb2f39dbd955fda3c3150168f57ad4688c80fe674\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:31:38.904930 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cbb6e9e3a3a4313e0dbf5b2b06d93cda02a4847c799ea4f1a6d1ba384f3d5ce1-shm.mount: Deactivated successfully. Oct 2 19:31:38.924492 systemd[1]: cri-containerd-cbb6e9e3a3a4313e0dbf5b2b06d93cda02a4847c799ea4f1a6d1ba384f3d5ce1.scope: Deactivated successfully. Oct 2 19:31:38.923000 audit: BPF prog-id=91 op=UNLOAD Oct 2 19:31:38.927870 kernel: audit: type=1334 audit(1696275098.923:782): prog-id=91 op=UNLOAD Oct 2 19:31:38.927000 audit: BPF prog-id=94 op=UNLOAD Oct 2 19:31:38.931811 kernel: audit: type=1334 audit(1696275098.927:783): prog-id=94 op=UNLOAD Oct 2 19:31:38.959580 kubelet[2196]: I1002 19:31:38.957467 2196 scope.go:117] "RemoveContainer" containerID="47c0c84d3ba7a281ac2cd81fbb791545a3c2974e5b287638e2e53284cd839b5f" Oct 2 19:31:38.962751 kubelet[2196]: I1002 19:31:38.959943 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-hubble-tls\") pod \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " Oct 2 19:31:38.962751 kubelet[2196]: I1002 19:31:38.960002 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-hostproc\") pod \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " Oct 2 19:31:38.962751 kubelet[2196]: I1002 19:31:38.960042 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-host-proc-sys-net\") pod \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " Oct 2 19:31:38.962751 kubelet[2196]: I1002 19:31:38.960090 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grpxb\" (UniqueName: \"kubernetes.io/projected/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-kube-api-access-grpxb\") pod \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\" (UID: 
\"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " Oct 2 19:31:38.962751 kubelet[2196]: I1002 19:31:38.960130 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-cilium-cgroup\") pod \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " Oct 2 19:31:38.962751 kubelet[2196]: I1002 19:31:38.960173 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-cilium-ipsec-secrets\") pod \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " Oct 2 19:31:38.962751 kubelet[2196]: I1002 19:31:38.960210 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-lib-modules\") pod \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " Oct 2 19:31:38.962751 kubelet[2196]: I1002 19:31:38.960256 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-cni-path\") pod \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " Oct 2 19:31:38.962751 kubelet[2196]: I1002 19:31:38.960296 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-host-proc-sys-kernel\") pod \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " Oct 2 19:31:38.962751 kubelet[2196]: I1002 19:31:38.960342 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-clustermesh-secrets\") pod \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " Oct 2 19:31:38.962751 kubelet[2196]: I1002 19:31:38.960382 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-xtables-lock\") pod \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " Oct 2 19:31:38.962751 kubelet[2196]: I1002 19:31:38.960427 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-cilium-config-path\") pod \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " Oct 2 19:31:38.962751 kubelet[2196]: I1002 19:31:38.960467 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-etc-cni-netd\") pod \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " Oct 2 19:31:38.962751 kubelet[2196]: I1002 19:31:38.960506 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-cilium-run\") pod \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " Oct 2 
19:31:38.962751 kubelet[2196]: I1002 19:31:38.960543 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-bpf-maps\") pod \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\" (UID: \"ac545ceb-d093-4cae-a8b7-5fbb02efaa26\") " Oct 2 19:31:38.962751 kubelet[2196]: I1002 19:31:38.960619 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ac545ceb-d093-4cae-a8b7-5fbb02efaa26" (UID: "ac545ceb-d093-4cae-a8b7-5fbb02efaa26"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:38.963890 kubelet[2196]: I1002 19:31:38.961176 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-cni-path" (OuterVolumeSpecName: "cni-path") pod "ac545ceb-d093-4cae-a8b7-5fbb02efaa26" (UID: "ac545ceb-d093-4cae-a8b7-5fbb02efaa26"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:38.963890 kubelet[2196]: I1002 19:31:38.961242 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-hostproc" (OuterVolumeSpecName: "hostproc") pod "ac545ceb-d093-4cae-a8b7-5fbb02efaa26" (UID: "ac545ceb-d093-4cae-a8b7-5fbb02efaa26"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:38.963890 kubelet[2196]: I1002 19:31:38.961284 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ac545ceb-d093-4cae-a8b7-5fbb02efaa26" (UID: "ac545ceb-d093-4cae-a8b7-5fbb02efaa26"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:38.963890 kubelet[2196]: I1002 19:31:38.961651 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ac545ceb-d093-4cae-a8b7-5fbb02efaa26" (UID: "ac545ceb-d093-4cae-a8b7-5fbb02efaa26"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:38.963890 kubelet[2196]: I1002 19:31:38.962103 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ac545ceb-d093-4cae-a8b7-5fbb02efaa26" (UID: "ac545ceb-d093-4cae-a8b7-5fbb02efaa26"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:38.967947 kubelet[2196]: I1002 19:31:38.967881 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ac545ceb-d093-4cae-a8b7-5fbb02efaa26" (UID: "ac545ceb-d093-4cae-a8b7-5fbb02efaa26"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:38.970661 kubelet[2196]: I1002 19:31:38.968132 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ac545ceb-d093-4cae-a8b7-5fbb02efaa26" (UID: "ac545ceb-d093-4cae-a8b7-5fbb02efaa26"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:38.970906 kubelet[2196]: I1002 19:31:38.970128 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ac545ceb-d093-4cae-a8b7-5fbb02efaa26" (UID: "ac545ceb-d093-4cae-a8b7-5fbb02efaa26"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:38.971027 kubelet[2196]: I1002 19:31:38.970595 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ac545ceb-d093-4cae-a8b7-5fbb02efaa26" (UID: "ac545ceb-d093-4cae-a8b7-5fbb02efaa26"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:38.973197 env[1740]: time="2023-10-02T19:31:38.973137078Z" level=info msg="RemoveContainer for \"47c0c84d3ba7a281ac2cd81fbb791545a3c2974e5b287638e2e53284cd839b5f\"" Oct 2 19:31:38.978830 systemd[1]: var-lib-kubelet-pods-ac545ceb\x2dd093\x2d4cae\x2da8b7\x2d5fbb02efaa26-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:31:38.981084 env[1740]: time="2023-10-02T19:31:38.980607799Z" level=info msg="RemoveContainer for \"47c0c84d3ba7a281ac2cd81fbb791545a3c2974e5b287638e2e53284cd839b5f\" returns successfully" Oct 2 19:31:38.982431 kubelet[2196]: I1002 19:31:38.982384 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ac545ceb-d093-4cae-a8b7-5fbb02efaa26" (UID: "ac545ceb-d093-4cae-a8b7-5fbb02efaa26"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:31:38.988013 kubelet[2196]: I1002 19:31:38.987961 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ac545ceb-d093-4cae-a8b7-5fbb02efaa26" (UID: "ac545ceb-d093-4cae-a8b7-5fbb02efaa26"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:31:38.995340 kubelet[2196]: I1002 19:31:38.995287 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "ac545ceb-d093-4cae-a8b7-5fbb02efaa26" (UID: "ac545ceb-d093-4cae-a8b7-5fbb02efaa26"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:31:39.003369 kubelet[2196]: I1002 19:31:39.003239 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-kube-api-access-grpxb" (OuterVolumeSpecName: "kube-api-access-grpxb") pod "ac545ceb-d093-4cae-a8b7-5fbb02efaa26" (UID: "ac545ceb-d093-4cae-a8b7-5fbb02efaa26"). InnerVolumeSpecName "kube-api-access-grpxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:31:39.003772 kubelet[2196]: I1002 19:31:39.003732 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ac545ceb-d093-4cae-a8b7-5fbb02efaa26" (UID: "ac545ceb-d093-4cae-a8b7-5fbb02efaa26"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:31:39.026297 env[1740]: time="2023-10-02T19:31:39.025977779Z" level=info msg="shim disconnected" id=cbb6e9e3a3a4313e0dbf5b2b06d93cda02a4847c799ea4f1a6d1ba384f3d5ce1 Oct 2 19:31:39.026297 env[1740]: time="2023-10-02T19:31:39.026045004Z" level=warning msg="cleaning up after shim disconnected" id=cbb6e9e3a3a4313e0dbf5b2b06d93cda02a4847c799ea4f1a6d1ba384f3d5ce1 namespace=k8s.io Oct 2 19:31:39.026297 env[1740]: time="2023-10-02T19:31:39.026066748Z" level=info msg="cleaning up dead shim" Oct 2 19:31:39.054510 env[1740]: time="2023-10-02T19:31:39.054455070Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:31:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3327 runtime=io.containerd.runc.v2\n" Oct 2 19:31:39.055306 env[1740]: time="2023-10-02T19:31:39.055259344Z" level=info msg="TearDown network for sandbox \"cbb6e9e3a3a4313e0dbf5b2b06d93cda02a4847c799ea4f1a6d1ba384f3d5ce1\" successfully" Oct 2 19:31:39.055506 env[1740]: time="2023-10-02T19:31:39.055471578Z" level=info msg="StopPodSandbox for \"cbb6e9e3a3a4313e0dbf5b2b06d93cda02a4847c799ea4f1a6d1ba384f3d5ce1\" returns successfully" Oct 2 19:31:39.061193 kubelet[2196]: I1002 19:31:39.060687 2196 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-bpf-maps\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:31:39.061193 kubelet[2196]: I1002 19:31:39.060769 2196 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-xtables-lock\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:31:39.061193 kubelet[2196]: I1002 19:31:39.060797 2196 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-cilium-config-path\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:31:39.061193 kubelet[2196]: I1002 19:31:39.060826 2196 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-etc-cni-netd\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:31:39.061193 kubelet[2196]: I1002 19:31:39.060849 2196 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-cilium-run\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:31:39.061193 kubelet[2196]: I1002 19:31:39.060872 2196 reconciler_common.go:300] "Volume detached for volume 
\"kube-api-access-grpxb\" (UniqueName: \"kubernetes.io/projected/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-kube-api-access-grpxb\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:31:39.061193 kubelet[2196]: I1002 19:31:39.060897 2196 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-hubble-tls\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:31:39.061193 kubelet[2196]: I1002 19:31:39.060920 2196 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-hostproc\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:31:39.061193 kubelet[2196]: I1002 19:31:39.060942 2196 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-host-proc-sys-net\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:31:39.061193 kubelet[2196]: I1002 19:31:39.060966 2196 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-cilium-cgroup\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:31:39.061193 kubelet[2196]: I1002 19:31:39.060992 2196 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-cilium-ipsec-secrets\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:31:39.061193 kubelet[2196]: I1002 19:31:39.061015 2196 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-lib-modules\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:31:39.061193 kubelet[2196]: I1002 19:31:39.061037 2196 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-cni-path\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:31:39.061193 kubelet[2196]: I1002 19:31:39.061060 2196 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-host-proc-sys-kernel\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:31:39.061193 kubelet[2196]: I1002 19:31:39.061082 2196 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ac545ceb-d093-4cae-a8b7-5fbb02efaa26-clustermesh-secrets\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:31:39.161437 kubelet[2196]: I1002 19:31:39.161379 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhzmb\" (UniqueName: \"kubernetes.io/projected/186acf19-99fa-4b8b-b891-8babc242df77-kube-api-access-dhzmb\") pod \"186acf19-99fa-4b8b-b891-8babc242df77\" (UID: \"186acf19-99fa-4b8b-b891-8babc242df77\") " Oct 2 19:31:39.161618 kubelet[2196]: I1002 19:31:39.161456 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/186acf19-99fa-4b8b-b891-8babc242df77-cilium-config-path\") pod \"186acf19-99fa-4b8b-b891-8babc242df77\" (UID: \"186acf19-99fa-4b8b-b891-8babc242df77\") " Oct 2 19:31:39.168416 kubelet[2196]: I1002 19:31:39.168279 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/186acf19-99fa-4b8b-b891-8babc242df77-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "186acf19-99fa-4b8b-b891-8babc242df77" (UID: "186acf19-99fa-4b8b-b891-8babc242df77"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:31:39.170297 kubelet[2196]: I1002 19:31:39.170247 2196 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/186acf19-99fa-4b8b-b891-8babc242df77-kube-api-access-dhzmb" (OuterVolumeSpecName: "kube-api-access-dhzmb") pod "186acf19-99fa-4b8b-b891-8babc242df77" (UID: "186acf19-99fa-4b8b-b891-8babc242df77"). InnerVolumeSpecName "kube-api-access-dhzmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:31:39.261683 kubelet[2196]: I1002 19:31:39.261624 2196 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dhzmb\" (UniqueName: \"kubernetes.io/projected/186acf19-99fa-4b8b-b891-8babc242df77-kube-api-access-dhzmb\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:31:39.261914 kubelet[2196]: I1002 19:31:39.261731 2196 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/186acf19-99fa-4b8b-b891-8babc242df77-cilium-config-path\") on node \"172.31.20.240\" DevicePath \"\"" Oct 2 19:31:39.267791 systemd[1]: Removed slice kubepods-burstable-podac545ceb_d093_4cae_a8b7_5fbb02efaa26.slice. Oct 2 19:31:39.286235 kubelet[2196]: E1002 19:31:39.286197 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:39.292404 kubelet[2196]: E1002 19:31:39.292373 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:39.697296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbb6e9e3a3a4313e0dbf5b2b06d93cda02a4847c799ea4f1a6d1ba384f3d5ce1-rootfs.mount: Deactivated successfully. Oct 2 19:31:39.697464 systemd[1]: var-lib-kubelet-pods-ac545ceb\x2dd093\x2d4cae\x2da8b7\x2d5fbb02efaa26-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgrpxb.mount: Deactivated successfully. Oct 2 19:31:39.697605 systemd[1]: var-lib-kubelet-pods-186acf19\x2d99fa\x2d4b8b\x2db891\x2d8babc242df77-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddhzmb.mount: Deactivated successfully. Oct 2 19:31:39.697768 systemd[1]: var-lib-kubelet-pods-ac545ceb\x2dd093\x2d4cae\x2da8b7\x2d5fbb02efaa26-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 2 19:31:39.697908 systemd[1]: var-lib-kubelet-pods-ac545ceb\x2dd093\x2d4cae\x2da8b7\x2d5fbb02efaa26-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:31:39.962767 kubelet[2196]: I1002 19:31:39.962162 2196 scope.go:117] "RemoveContainer" containerID="16ef950d4c50d8d15c85f2fdb2f39dbd955fda3c3150168f57ad4688c80fe674" Oct 2 19:31:39.965449 env[1740]: time="2023-10-02T19:31:39.965360485Z" level=info msg="RemoveContainer for \"16ef950d4c50d8d15c85f2fdb2f39dbd955fda3c3150168f57ad4688c80fe674\"" Oct 2 19:31:39.969257 env[1740]: time="2023-10-02T19:31:39.969189129Z" level=info msg="RemoveContainer for \"16ef950d4c50d8d15c85f2fdb2f39dbd955fda3c3150168f57ad4688c80fe674\" returns successfully" Oct 2 19:31:39.973533 systemd[1]: Removed slice kubepods-besteffort-pod186acf19_99fa_4b8b_b891_8babc242df77.slice. 
Oct 2 19:31:40.287343 kubelet[2196]: E1002 19:31:40.287297 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:40.314597 kubelet[2196]: I1002 19:31:40.314542 2196 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="186acf19-99fa-4b8b-b891-8babc242df77" path="/var/lib/kubelet/pods/186acf19-99fa-4b8b-b891-8babc242df77/volumes" Oct 2 19:31:40.315634 kubelet[2196]: I1002 19:31:40.315588 2196 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ac545ceb-d093-4cae-a8b7-5fbb02efaa26" path="/var/lib/kubelet/pods/ac545ceb-d093-4cae-a8b7-5fbb02efaa26/volumes" Oct 2 19:31:41.289014 kubelet[2196]: E1002 19:31:41.288965 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:42.289943 kubelet[2196]: E1002 19:31:42.289901 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:43.290784 kubelet[2196]: E1002 19:31:43.290679 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:44.291461 kubelet[2196]: E1002 19:31:44.291427 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:44.293442 kubelet[2196]: E1002 19:31:44.293408 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:45.292365 kubelet[2196]: E1002 19:31:45.292317 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:46.294020 kubelet[2196]: E1002 19:31:46.293948 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:47.294638 kubelet[2196]: E1002 19:31:47.294579 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:48.295202 kubelet[2196]: E1002 19:31:48.295141 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:49.295091 kubelet[2196]: E1002 19:31:49.295025 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:49.295453 kubelet[2196]: E1002 19:31:49.295424 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:50.296634 kubelet[2196]: E1002 19:31:50.296572 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:51.297575 kubelet[2196]: E1002 19:31:51.297506 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:52.298281 kubelet[2196]: E1002 19:31:52.298221 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:53.299104 kubelet[2196]: E1002 19:31:53.299036 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:54.021575 kubelet[2196]: E1002 
19:31:54.021520 2196 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:54.296482 kubelet[2196]: E1002 19:31:54.295982 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:54.309348 kubelet[2196]: E1002 19:31:54.299955 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:55.300633 kubelet[2196]: E1002 19:31:55.300562 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:56.301628 kubelet[2196]: E1002 19:31:56.301555 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:57.302233 kubelet[2196]: E1002 19:31:57.302189 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:58.303996 kubelet[2196]: E1002 19:31:58.303922 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:59.059435 amazon-ssm-agent[1711]: 2023-10-02 19:31:59 INFO Backing off health check to every 600 seconds for 1800 seconds. Oct 2 19:31:59.159823 amazon-ssm-agent[1711]: 2023-10-02 19:31:59 ERROR Health ping failed with error - AccessDeniedException: User: arn:aws:sts::075585003325:assumed-role/jenkins-test/i-0f7d01baa5c7af7cb is not authorized to perform: ssm:UpdateInstanceInformation on resource: arn:aws:ec2:us-west-2:075585003325:instance/i-0f7d01baa5c7af7cb because no identity-based policy allows the ssm:UpdateInstanceInformation action Oct 2 19:31:59.159823 amazon-ssm-agent[1711]: status code: 400, request id: 2fdbb4ef-9454-41cb-ac39-9b6647c6e805 Oct 2 19:31:59.297449 kubelet[2196]: E1002 19:31:59.297354 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:59.304670 kubelet[2196]: E1002 19:31:59.304604 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:00.305741 kubelet[2196]: E1002 19:32:00.305629 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:00.934777 kubelet[2196]: E1002 19:32:00.934686 2196 controller.go:193] "Failed to update lease" err="Put \"https://172.31.27.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.240?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 2 19:32:01.306439 kubelet[2196]: E1002 19:32:01.306400 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:02.307404 kubelet[2196]: E1002 19:32:02.307370 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:03.308715 kubelet[2196]: E1002 19:32:03.308659 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:04.298142 kubelet[2196]: E1002 19:32:04.298089 2196 kubelet.go:2855] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:32:04.309655 kubelet[2196]: E1002 19:32:04.309617 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:05.310556 kubelet[2196]: E1002 19:32:05.310514 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:06.311920 kubelet[2196]: E1002 19:32:06.311867 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:07.312288 kubelet[2196]: E1002 19:32:07.312252 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:08.313262 kubelet[2196]: E1002 19:32:08.313226 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:09.299344 kubelet[2196]: E1002 19:32:09.299312 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:32:09.313995 kubelet[2196]: E1002 19:32:09.313942 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:10.314680 kubelet[2196]: E1002 19:32:10.314645 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:10.936178 kubelet[2196]: E1002 19:32:10.935871 2196 controller.go:193] "Failed to update lease" err="Put \"https://172.31.27.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.240?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 2 19:32:11.316457 kubelet[2196]: E1002 19:32:11.316407 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:12.317234 kubelet[2196]: E1002 19:32:12.317172 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:13.318092 kubelet[2196]: E1002 19:32:13.318031 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:14.021078 kubelet[2196]: E1002 19:32:14.021024 2196 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:14.084996 env[1740]: time="2023-10-02T19:32:14.084918625Z" level=info msg="StopPodSandbox for \"47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653\"" Oct 2 19:32:14.085611 env[1740]: time="2023-10-02T19:32:14.085109956Z" level=info msg="TearDown network for sandbox \"47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653\" successfully" Oct 2 19:32:14.085611 env[1740]: time="2023-10-02T19:32:14.085205898Z" level=info msg="StopPodSandbox for \"47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653\" returns successfully" Oct 2 19:32:14.090052 env[1740]: time="2023-10-02T19:32:14.089999568Z" level=info msg="RemovePodSandbox for \"47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653\"" Oct 2 19:32:14.090301 env[1740]: time="2023-10-02T19:32:14.090243160Z" level=info msg="Forcibly stopping sandbox 
\"47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653\"" Oct 2 19:32:14.090500 env[1740]: time="2023-10-02T19:32:14.090467264Z" level=info msg="TearDown network for sandbox \"47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653\" successfully" Oct 2 19:32:14.098601 env[1740]: time="2023-10-02T19:32:14.098496966Z" level=info msg="RemovePodSandbox \"47186e7c373ae73671249c01a220b3735249cd1374203cb46fbace2ae6941653\" returns successfully" Oct 2 19:32:14.100404 env[1740]: time="2023-10-02T19:32:14.100318716Z" level=info msg="StopPodSandbox for \"cbb6e9e3a3a4313e0dbf5b2b06d93cda02a4847c799ea4f1a6d1ba384f3d5ce1\"" Oct 2 19:32:14.101022 env[1740]: time="2023-10-02T19:32:14.100930714Z" level=info msg="TearDown network for sandbox \"cbb6e9e3a3a4313e0dbf5b2b06d93cda02a4847c799ea4f1a6d1ba384f3d5ce1\" successfully" Oct 2 19:32:14.101237 env[1740]: time="2023-10-02T19:32:14.101181686Z" level=info msg="StopPodSandbox for \"cbb6e9e3a3a4313e0dbf5b2b06d93cda02a4847c799ea4f1a6d1ba384f3d5ce1\" returns successfully" Oct 2 19:32:14.102835 env[1740]: time="2023-10-02T19:32:14.102773236Z" level=info msg="RemovePodSandbox for \"cbb6e9e3a3a4313e0dbf5b2b06d93cda02a4847c799ea4f1a6d1ba384f3d5ce1\"" Oct 2 19:32:14.102960 env[1740]: time="2023-10-02T19:32:14.102831329Z" level=info msg="Forcibly stopping sandbox \"cbb6e9e3a3a4313e0dbf5b2b06d93cda02a4847c799ea4f1a6d1ba384f3d5ce1\"" Oct 2 19:32:14.103059 env[1740]: time="2023-10-02T19:32:14.102951787Z" level=info msg="TearDown network for sandbox \"cbb6e9e3a3a4313e0dbf5b2b06d93cda02a4847c799ea4f1a6d1ba384f3d5ce1\" successfully" Oct 2 19:32:14.107129 env[1740]: time="2023-10-02T19:32:14.107066006Z" level=info msg="RemovePodSandbox \"cbb6e9e3a3a4313e0dbf5b2b06d93cda02a4847c799ea4f1a6d1ba384f3d5ce1\" returns successfully" Oct 2 19:32:14.146661 kubelet[2196]: W1002 19:32:14.146614 2196 machine.go:65] Cannot read vendor id correctly, set empty. 
Oct 2 19:32:14.301043 kubelet[2196]: E1002 19:32:14.300899 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:32:14.318655 kubelet[2196]: E1002 19:32:14.318621 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:14.321664 kubelet[2196]: E1002 19:32:14.321623 2196 controller.go:193] "Failed to update lease" err="Put \"https://172.31.27.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.240?timeout=10s\": unexpected EOF" Oct 2 19:32:14.322308 kubelet[2196]: E1002 19:32:14.322278 2196 controller.go:193] "Failed to update lease" err="Put \"https://172.31.27.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.240?timeout=10s\": dial tcp 172.31.27.184:6443: connect: connection refused" Oct 2 19:32:14.322918 kubelet[2196]: E1002 19:32:14.322888 2196 controller.go:193] "Failed to update lease" err="Put \"https://172.31.27.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.240?timeout=10s\": dial tcp 172.31.27.184:6443: connect: connection refused" Oct 2 19:32:14.323120 kubelet[2196]: I1002 19:32:14.323097 2196 controller.go:116] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Oct 2 19:32:14.323640 kubelet[2196]: E1002 19:32:14.323615 2196 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.240?timeout=10s\": dial tcp 172.31.27.184:6443: connect: connection refused" interval="200ms" Oct 2 19:32:14.525297 kubelet[2196]: E1002 19:32:14.525263 2196 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.240?timeout=10s\": dial tcp 172.31.27.184:6443: connect: connection refused" interval="400ms" Oct 2 19:32:14.927058 kubelet[2196]: E1002 19:32:14.927023 2196 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.240?timeout=10s\": dial tcp 172.31.27.184:6443: connect: connection refused" interval="800ms" Oct 2 19:32:15.320320 kubelet[2196]: E1002 19:32:15.320254 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:16.321304 kubelet[2196]: E1002 19:32:16.321263 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:17.322243 kubelet[2196]: E1002 19:32:17.322182 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:18.322606 kubelet[2196]: E1002 19:32:18.322549 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:19.302623 kubelet[2196]: E1002 19:32:19.302530 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:32:19.323326 kubelet[2196]: E1002 19:32:19.323262 2196 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:20.324416 kubelet[2196]: E1002 19:32:20.324361 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:21.324985 kubelet[2196]: E1002 19:32:21.324951 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:22.326656 kubelet[2196]: E1002 19:32:22.326602 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:23.327828 kubelet[2196]: E1002 19:32:23.327761 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:24.303776 kubelet[2196]: E1002 19:32:24.303742 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:32:24.328508 kubelet[2196]: E1002 19:32:24.328476 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:25.330004 kubelet[2196]: E1002 19:32:25.329939 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:25.728190 kubelet[2196]: E1002 19:32:25.728155 2196 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.240?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Oct 2 19:32:26.331035 kubelet[2196]: E1002 19:32:26.331001 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:27.332012 kubelet[2196]: E1002 19:32:27.331949 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:28.333107 kubelet[2196]: E1002 19:32:28.333073 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:29.305534 kubelet[2196]: E1002 19:32:29.305488 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:32:29.334562 kubelet[2196]: E1002 19:32:29.334507 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:30.334787 kubelet[2196]: E1002 19:32:30.334752 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:31.335917 kubelet[2196]: E1002 19:32:31.335881 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:32.337375 kubelet[2196]: E1002 19:32:32.337328 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:32.503817 kubelet[2196]: E1002 19:32:32.503769 2196 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.20.240\": Get \"https://172.31.27.184:6443/api/v1/nodes/172.31.20.240?resourceVersion=0&timeout=10s\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 2 19:32:33.338340 kubelet[2196]: E1002 19:32:33.338306 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:34.020980 kubelet[2196]: E1002 19:32:34.020921 2196 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:34.307209 kubelet[2196]: E1002 19:32:34.306905 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:32:34.339182 kubelet[2196]: E1002 19:32:34.339117 2196 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"