Oct 2 19:26:27.164724 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Oct 2 19:26:27.164762 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Oct 2 17:55:37 -00 2023 Oct 2 19:26:27.164785 kernel: efi: EFI v2.70 by EDK II Oct 2 19:26:27.164800 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71accf98 Oct 2 19:26:27.164813 kernel: ACPI: Early table checksum verification disabled Oct 2 19:26:27.164827 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Oct 2 19:26:27.164842 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Oct 2 19:26:27.164857 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Oct 2 19:26:27.164871 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Oct 2 19:26:27.164884 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Oct 2 19:26:27.164902 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Oct 2 19:26:27.164916 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Oct 2 19:26:27.164930 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Oct 2 19:26:27.164944 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Oct 2 19:26:27.164960 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Oct 2 19:26:27.164979 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Oct 2 19:26:27.164993 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Oct 2 19:26:27.165008 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Oct 2 19:26:27.165022 kernel: printk: bootconsole [uart0] enabled Oct 2 19:26:27.165036 kernel: NUMA: Failed to initialise from firmware Oct 2 19:26:27.173630 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Oct 2 19:26:27.173667 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff] Oct 2 19:26:27.173683 kernel: Zone ranges: Oct 2 19:26:27.173699 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Oct 2 19:26:27.173714 kernel: DMA32 empty Oct 2 19:26:27.173728 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Oct 2 19:26:27.173751 kernel: Movable zone start for each node Oct 2 19:26:27.173766 kernel: Early memory node ranges Oct 2 19:26:27.173781 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff] Oct 2 19:26:27.173795 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Oct 2 19:26:27.173810 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Oct 2 19:26:27.173824 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Oct 2 19:26:27.173838 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Oct 2 19:26:27.173853 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Oct 2 19:26:27.173867 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Oct 2 19:26:27.173882 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Oct 2 19:26:27.173896 kernel: psci: probing for conduit method from ACPI. Oct 2 19:26:27.173910 kernel: psci: PSCIv1.0 detected in firmware. 
Oct 2 19:26:27.173929 kernel: psci: Using standard PSCI v0.2 function IDs Oct 2 19:26:27.173944 kernel: psci: Trusted OS migration not required Oct 2 19:26:27.173965 kernel: psci: SMC Calling Convention v1.1 Oct 2 19:26:27.173980 kernel: ACPI: SRAT not present Oct 2 19:26:27.173996 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Oct 2 19:26:27.174016 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Oct 2 19:26:27.174031 kernel: pcpu-alloc: [0] 0 [0] 1 Oct 2 19:26:27.174047 kernel: Detected PIPT I-cache on CPU0 Oct 2 19:26:27.174732 kernel: CPU features: detected: GIC system register CPU interface Oct 2 19:26:27.174750 kernel: CPU features: detected: Spectre-v2 Oct 2 19:26:27.174766 kernel: CPU features: detected: Spectre-v3a Oct 2 19:26:27.174781 kernel: CPU features: detected: Spectre-BHB Oct 2 19:26:27.174796 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 2 19:26:27.174811 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 2 19:26:27.174827 kernel: CPU features: detected: ARM erratum 1742098 Oct 2 19:26:27.174842 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Oct 2 19:26:27.174865 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Oct 2 19:26:27.174881 kernel: Policy zone: Normal Oct 2 19:26:27.174899 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 19:26:27.174915 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:26:27.174931 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 19:26:27.174967 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:26:27.174983 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:26:27.174999 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Oct 2 19:26:27.175016 kernel: Memory: 3826444K/4030464K available (9792K kernel code, 2092K rwdata, 7548K rodata, 34560K init, 779K bss, 204020K reserved, 0K cma-reserved) Oct 2 19:26:27.175032 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 2 19:26:27.175073 kernel: trace event string verifier disabled Oct 2 19:26:27.175092 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 2 19:26:27.175108 kernel: rcu: RCU event tracing is enabled. Oct 2 19:26:27.175124 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 2 19:26:27.175140 kernel: Trampoline variant of Tasks RCU enabled. Oct 2 19:26:27.175155 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:26:27.175171 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 2 19:26:27.175186 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 2 19:26:27.175201 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 2 19:26:27.175216 kernel: GICv3: 96 SPIs implemented Oct 2 19:26:27.175231 kernel: GICv3: 0 Extended SPIs implemented Oct 2 19:26:27.175246 kernel: GICv3: Distributor has no Range Selector support Oct 2 19:26:27.175266 kernel: Root IRQ handler: gic_handle_irq Oct 2 19:26:27.175281 kernel: GICv3: 16 PPIs implemented Oct 2 19:26:27.175296 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Oct 2 19:26:27.175311 kernel: ACPI: SRAT not present Oct 2 19:26:27.175326 kernel: ITS [mem 0x10080000-0x1009ffff] Oct 2 19:26:27.175341 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1) Oct 2 19:26:27.175357 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1) Oct 2 19:26:27.175372 kernel: GICv3: using LPI property table @0x00000004000c0000 Oct 2 19:26:27.175387 kernel: ITS: Using hypervisor restricted LPI range [128] Oct 2 19:26:27.175402 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000 Oct 2 19:26:27.175417 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Oct 2 19:26:27.175436 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Oct 2 19:26:27.175452 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Oct 2 19:26:27.175467 kernel: Console: colour dummy device 80x25 Oct 2 19:26:27.175483 kernel: printk: console [tty1] enabled Oct 2 19:26:27.175498 kernel: ACPI: Core revision 20210730 Oct 2 19:26:27.175514 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Oct 2 19:26:27.175530 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:26:27.175546 kernel: LSM: Security Framework initializing Oct 2 19:26:27.175561 kernel: SELinux: Initializing. Oct 2 19:26:27.175577 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:26:27.175596 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:26:27.175612 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:26:27.175627 kernel: Platform MSI: ITS@0x10080000 domain created Oct 2 19:26:27.175643 kernel: PCI/MSI: ITS@0x10080000 domain created Oct 2 19:26:27.175658 kernel: Remapping and enabling EFI services. Oct 2 19:26:27.175673 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:26:27.175689 kernel: Detected PIPT I-cache on CPU1 Oct 2 19:26:27.175705 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Oct 2 19:26:27.175721 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000 Oct 2 19:26:27.175740 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Oct 2 19:26:27.175755 kernel: smp: Brought up 1 node, 2 CPUs Oct 2 19:26:27.175771 kernel: SMP: Total of 2 processors activated. 
Oct 2 19:26:27.175786 kernel: CPU features: detected: 32-bit EL0 Support Oct 2 19:26:27.175801 kernel: CPU features: detected: 32-bit EL1 Support Oct 2 19:26:27.175817 kernel: CPU features: detected: CRC32 instructions Oct 2 19:26:27.175832 kernel: CPU: All CPU(s) started at EL1 Oct 2 19:26:27.175847 kernel: alternatives: patching kernel code Oct 2 19:26:27.175863 kernel: devtmpfs: initialized Oct 2 19:26:27.175882 kernel: KASLR disabled due to lack of seed Oct 2 19:26:27.175897 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:26:27.175913 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 2 19:26:27.175939 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:26:27.175959 kernel: SMBIOS 3.0.0 present. Oct 2 19:26:27.175975 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Oct 2 19:26:27.175991 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:26:27.176007 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 2 19:26:27.176023 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 2 19:26:27.176039 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 2 19:26:27.176073 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:26:27.176092 kernel: audit: type=2000 audit(0.248:1): state=initialized audit_enabled=0 res=1 Oct 2 19:26:27.176114 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:26:27.176130 kernel: cpuidle: using governor menu Oct 2 19:26:27.176146 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 2 19:26:27.176163 kernel: ASID allocator initialised with 32768 entries Oct 2 19:26:27.176179 kernel: ACPI: bus type PCI registered Oct 2 19:26:27.176199 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:26:27.176215 kernel: Serial: AMBA PL011 UART driver Oct 2 19:26:27.176231 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:26:27.176247 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Oct 2 19:26:27.176263 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:26:27.176280 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Oct 2 19:26:27.176296 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:26:27.176312 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 2 19:26:27.176328 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:26:27.176348 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:26:27.176364 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:26:27.176380 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:26:27.176396 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:26:27.176412 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:26:27.176428 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:26:27.176444 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 19:26:27.176460 kernel: ACPI: Interpreter enabled Oct 2 19:26:27.176476 kernel: ACPI: Using GIC for interrupt routing Oct 2 19:26:27.176496 kernel: ACPI: MCFG table detected, 1 entries Oct 2 19:26:27.176512 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Oct 2 19:26:27.176930 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:26:27.186879 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 2 19:26:27.187162 kernel: 
acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 2 19:26:27.187357 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Oct 2 19:26:27.187548 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Oct 2 19:26:27.187580 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Oct 2 19:26:27.187597 kernel: acpiphp: Slot [1] registered Oct 2 19:26:27.187614 kernel: acpiphp: Slot [2] registered Oct 2 19:26:27.187631 kernel: acpiphp: Slot [3] registered Oct 2 19:26:27.187647 kernel: acpiphp: Slot [4] registered Oct 2 19:26:27.187663 kernel: acpiphp: Slot [5] registered Oct 2 19:26:27.187679 kernel: acpiphp: Slot [6] registered Oct 2 19:26:27.187695 kernel: acpiphp: Slot [7] registered Oct 2 19:26:27.187711 kernel: acpiphp: Slot [8] registered Oct 2 19:26:27.187731 kernel: acpiphp: Slot [9] registered Oct 2 19:26:27.187747 kernel: acpiphp: Slot [10] registered Oct 2 19:26:27.187764 kernel: acpiphp: Slot [11] registered Oct 2 19:26:27.187780 kernel: acpiphp: Slot [12] registered Oct 2 19:26:27.187796 kernel: acpiphp: Slot [13] registered Oct 2 19:26:27.187812 kernel: acpiphp: Slot [14] registered Oct 2 19:26:27.187828 kernel: acpiphp: Slot [15] registered Oct 2 19:26:27.187856 kernel: acpiphp: Slot [16] registered Oct 2 19:26:27.187877 kernel: acpiphp: Slot [17] registered Oct 2 19:26:27.187893 kernel: acpiphp: Slot [18] registered Oct 2 19:26:27.187915 kernel: acpiphp: Slot [19] registered Oct 2 19:26:27.187931 kernel: acpiphp: Slot [20] registered Oct 2 19:26:27.187946 kernel: acpiphp: Slot [21] registered Oct 2 19:26:27.187963 kernel: acpiphp: Slot [22] registered Oct 2 19:26:27.187978 kernel: acpiphp: Slot [23] registered Oct 2 19:26:27.187994 kernel: acpiphp: Slot [24] registered Oct 2 19:26:27.188010 kernel: acpiphp: Slot [25] registered Oct 2 19:26:27.188026 kernel: acpiphp: Slot [26] registered Oct 2 19:26:27.188042 kernel: acpiphp: Slot [27] registered Oct 2 19:26:27.188110 kernel: acpiphp: Slot [28] registered Oct 2 19:26:27.188129 kernel: acpiphp: Slot [29] registered Oct 2 19:26:27.188145 kernel: acpiphp: Slot [30] registered Oct 2 19:26:27.188161 kernel: acpiphp: Slot [31] registered Oct 2 19:26:27.188176 kernel: PCI host bridge to bus 0000:00 Oct 2 19:26:27.188384 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Oct 2 19:26:27.188566 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 2 19:26:27.188744 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Oct 2 19:26:27.188925 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Oct 2 19:26:27.192276 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Oct 2 19:26:27.192513 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Oct 2 19:26:27.192712 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Oct 2 19:26:27.192921 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Oct 2 19:26:27.193147 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Oct 2 19:26:27.193353 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 2 19:26:27.193561 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Oct 2 19:26:27.193754 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Oct 2 19:26:27.193946 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Oct 2 19:26:27.194166 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Oct 2 19:26:27.194361 kernel: 
pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 2 19:26:27.194553 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Oct 2 19:26:27.194756 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Oct 2 19:26:27.194972 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Oct 2 19:26:27.195212 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Oct 2 19:26:27.195420 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Oct 2 19:26:27.195605 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Oct 2 19:26:27.195785 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 2 19:26:27.195963 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Oct 2 19:26:27.195992 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 2 19:26:27.196009 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 2 19:26:27.196027 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 2 19:26:27.196043 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 2 19:26:27.196081 kernel: iommu: Default domain type: Translated Oct 2 19:26:27.196100 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 2 19:26:27.196116 kernel: vgaarb: loaded Oct 2 19:26:27.196133 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:26:27.196150 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:26:27.196172 kernel: PTP clock support registered Oct 2 19:26:27.196189 kernel: Registered efivars operations Oct 2 19:26:27.196205 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 2 19:26:27.196221 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:26:27.196237 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:26:27.196253 kernel: pnp: PnP ACPI init Oct 2 19:26:27.196473 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Oct 2 19:26:27.196499 kernel: pnp: PnP ACPI: found 1 devices Oct 2 19:26:27.196515 kernel: NET: Registered PF_INET protocol family Oct 2 19:26:27.196536 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 19:26:27.196554 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 2 19:26:27.196570 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:26:27.196587 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:26:27.196603 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 2 19:26:27.196620 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 2 19:26:27.196636 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:26:27.196652 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:26:27.196669 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:26:27.196689 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:26:27.196705 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Oct 2 19:26:27.196722 kernel: kvm [1]: HYP mode not available Oct 2 19:26:27.196738 kernel: Initialise system trusted keyrings Oct 2 19:26:27.196754 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 2 19:26:27.196771 kernel: Key type asymmetric registered Oct 2 19:26:27.196787 kernel: Asymmetric key parser 'x509' registered Oct 2 19:26:27.196803 kernel: Block 
layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:26:27.196819 kernel: io scheduler mq-deadline registered Oct 2 19:26:27.196840 kernel: io scheduler kyber registered Oct 2 19:26:27.196856 kernel: io scheduler bfq registered Oct 2 19:26:27.205091 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Oct 2 19:26:27.205146 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 2 19:26:27.205165 kernel: ACPI: button: Power Button [PWRB] Oct 2 19:26:27.205182 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:26:27.205200 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Oct 2 19:26:27.205468 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Oct 2 19:26:27.205501 kernel: printk: console [ttyS0] disabled Oct 2 19:26:27.205519 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Oct 2 19:26:27.205536 kernel: printk: console [ttyS0] enabled Oct 2 19:26:27.205553 kernel: printk: bootconsole [uart0] disabled Oct 2 19:26:27.205569 kernel: thunder_xcv, ver 1.0 Oct 2 19:26:27.205585 kernel: thunder_bgx, ver 1.0 Oct 2 19:26:27.205602 kernel: nicpf, ver 1.0 Oct 2 19:26:27.205618 kernel: nicvf, ver 1.0 Oct 2 19:26:27.205822 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 2 19:26:27.206012 kernel: rtc-efi rtc-efi.0: setting system clock to 2023-10-02T19:26:26 UTC (1696274786) Oct 2 19:26:27.206036 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 2 19:26:27.206072 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:26:27.206092 kernel: Segment Routing with IPv6 Oct 2 19:26:27.206109 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:26:27.206125 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:26:27.206142 kernel: Key type dns_resolver registered Oct 2 19:26:27.206158 kernel: registered taskstats version 1 Oct 2 19:26:27.206180 kernel: Loading compiled-in X.509 certificates Oct 2 19:26:27.206197 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 3a2a38edc68cb70dc60ec0223a6460557b3bb28d' Oct 2 19:26:27.206213 kernel: Key type .fscrypt registered Oct 2 19:26:27.206229 kernel: Key type fscrypt-provisioning registered Oct 2 19:26:27.206245 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 2 19:26:27.206261 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:26:27.206277 kernel: ima: No architecture policies found Oct 2 19:26:27.206293 kernel: Freeing unused kernel memory: 34560K Oct 2 19:26:27.206309 kernel: Run /init as init process Oct 2 19:26:27.206329 kernel: with arguments: Oct 2 19:26:27.206345 kernel: /init Oct 2 19:26:27.206361 kernel: with environment: Oct 2 19:26:27.206376 kernel: HOME=/ Oct 2 19:26:27.206392 kernel: TERM=linux Oct 2 19:26:27.206408 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:26:27.206429 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:26:27.206451 systemd[1]: Detected virtualization amazon. Oct 2 19:26:27.206473 systemd[1]: Detected architecture arm64. Oct 2 19:26:27.206490 systemd[1]: Running in initrd. Oct 2 19:26:27.206508 systemd[1]: No hostname configured, using default hostname. Oct 2 19:26:27.206525 systemd[1]: Hostname set to . 
Oct 2 19:26:27.206543 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:26:27.206561 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:26:27.206578 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:26:27.206596 systemd[1]: Reached target cryptsetup.target. Oct 2 19:26:27.206617 systemd[1]: Reached target paths.target. Oct 2 19:26:27.206634 systemd[1]: Reached target slices.target. Oct 2 19:26:27.206652 systemd[1]: Reached target swap.target. Oct 2 19:26:27.206669 systemd[1]: Reached target timers.target. Oct 2 19:26:27.206687 systemd[1]: Listening on iscsid.socket. Oct 2 19:26:27.206705 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:26:27.206723 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:26:27.206741 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:26:27.206762 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:26:27.206780 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:26:27.206798 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:26:27.206816 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:26:27.206833 systemd[1]: Reached target sockets.target. Oct 2 19:26:27.206851 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:26:27.206868 systemd[1]: Finished network-cleanup.service. Oct 2 19:26:27.206886 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:26:27.206904 systemd[1]: Starting systemd-journald.service... Oct 2 19:26:27.206925 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:26:27.206962 systemd[1]: Starting systemd-resolved.service... Oct 2 19:26:27.206983 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:26:27.207001 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:26:27.207019 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:26:27.207037 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:26:27.207074 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:26:27.207094 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:26:27.207112 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:26:27.207137 kernel: audit: type=1130 audit(1696274787.177:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:27.207159 systemd-journald[308]: Journal started Oct 2 19:26:27.207246 systemd-journald[308]: Runtime Journal (/run/log/journal/ec2ceddd2331b069e91336326430be91) is 8.0M, max 75.4M, 67.4M free. Oct 2 19:26:27.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:27.166757 systemd-modules-load[309]: Inserted module 'overlay' Oct 2 19:26:27.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:27.220110 systemd[1]: Started systemd-journald.service. Oct 2 19:26:27.220144 kernel: audit: type=1130 audit(1696274787.208:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:27.226077 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Oct 2 19:26:27.229565 systemd-resolved[310]: Positive Trust Anchors: Oct 2 19:26:27.229591 systemd-resolved[310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:26:27.229644 systemd-resolved[310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:26:27.248720 systemd-modules-load[309]: Inserted module 'br_netfilter' Oct 2 19:26:27.249330 kernel: Bridge firewalling registered Oct 2 19:26:27.275080 kernel: SCSI subsystem initialized Oct 2 19:26:27.276519 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:26:27.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:27.288757 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:26:27.298078 kernel: audit: type=1130 audit(1696274787.277:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:27.308966 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:26:27.309031 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:26:27.313084 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:26:27.329849 systemd-modules-load[309]: Inserted module 'dm_multipath' Oct 2 19:26:27.336548 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:26:27.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:27.348343 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:26:27.362102 kernel: audit: type=1130 audit(1696274787.339:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:27.362186 dracut-cmdline[326]: dracut-dracut-053 Oct 2 19:26:27.371352 dracut-cmdline[326]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 19:26:27.396246 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:26:27.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:27.408385 kernel: audit: type=1130 audit(1696274787.398:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:27.616079 kernel: Loading iSCSI transport class v2.0-870. Oct 2 19:26:27.628078 kernel: iscsi: registered transport (tcp) Oct 2 19:26:27.654113 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:26:27.654183 kernel: QLogic iSCSI HBA Driver Oct 2 19:26:27.853131 kernel: random: crng init done Oct 2 19:26:27.853187 systemd-resolved[310]: Defaulting to hostname 'linux'. Oct 2 19:26:27.856990 systemd[1]: Started systemd-resolved.service. Oct 2 19:26:27.860646 systemd[1]: Reached target nss-lookup.target. Oct 2 19:26:27.871375 kernel: audit: type=1130 audit(1696274787.859:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:27.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:27.921378 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:26:27.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:27.925989 systemd[1]: Starting dracut-pre-udev.service... Oct 2 19:26:27.935273 kernel: audit: type=1130 audit(1696274787.923:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:28.020109 kernel: raid6: neonx8 gen() 6388 MB/s Oct 2 19:26:28.038086 kernel: raid6: neonx8 xor() 4696 MB/s Oct 2 19:26:28.056083 kernel: raid6: neonx4 gen() 6382 MB/s Oct 2 19:26:28.074084 kernel: raid6: neonx4 xor() 4889 MB/s Oct 2 19:26:28.092083 kernel: raid6: neonx2 gen() 5746 MB/s Oct 2 19:26:28.110083 kernel: raid6: neonx2 xor() 4513 MB/s Oct 2 19:26:28.128082 kernel: raid6: neonx1 gen() 4469 MB/s Oct 2 19:26:28.146082 kernel: raid6: neonx1 xor() 3654 MB/s Oct 2 19:26:28.164083 kernel: raid6: int64x8 gen() 3423 MB/s Oct 2 19:26:28.182082 kernel: raid6: int64x8 xor() 2089 MB/s Oct 2 19:26:28.200083 kernel: raid6: int64x4 gen() 3782 MB/s Oct 2 19:26:28.218082 kernel: raid6: int64x4 xor() 2189 MB/s Oct 2 19:26:28.236085 kernel: raid6: int64x2 gen() 3511 MB/s Oct 2 19:26:28.254082 kernel: raid6: int64x2 xor() 1944 MB/s Oct 2 19:26:28.272082 kernel: raid6: int64x1 gen() 2771 MB/s Oct 2 19:26:28.291673 kernel: raid6: int64x1 xor() 1451 MB/s Oct 2 19:26:28.291703 kernel: raid6: using algorithm neonx8 gen() 6388 MB/s Oct 2 19:26:28.291732 kernel: raid6: .... xor() 4696 MB/s, rmw enabled Oct 2 19:26:28.293532 kernel: raid6: using neon recovery algorithm Oct 2 19:26:28.312088 kernel: xor: measuring software checksum speed Oct 2 19:26:28.315085 kernel: 8regs : 9333 MB/sec Oct 2 19:26:28.317083 kernel: 32regs : 11107 MB/sec Oct 2 19:26:28.321431 kernel: arm64_neon : 9566 MB/sec Oct 2 19:26:28.321463 kernel: xor: using function: 32regs (11107 MB/sec) Oct 2 19:26:28.412095 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Oct 2 19:26:28.450185 systemd[1]: Finished dracut-pre-udev.service. 
Oct 2 19:26:28.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:28.461103 kernel: audit: type=1130 audit(1696274788.450:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:28.460000 audit: BPF prog-id=7 op=LOAD Oct 2 19:26:28.462184 systemd[1]: Starting systemd-udevd.service... Oct 2 19:26:28.467249 kernel: audit: type=1334 audit(1696274788.460:10): prog-id=7 op=LOAD Oct 2 19:26:28.460000 audit: BPF prog-id=8 op=LOAD Oct 2 19:26:28.501304 systemd-udevd[508]: Using default interface naming scheme 'v252'. Oct 2 19:26:28.512380 systemd[1]: Started systemd-udevd.service. Oct 2 19:26:28.520920 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:26:28.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:28.582612 dracut-pre-trigger[524]: rd.md=0: removing MD RAID activation Oct 2 19:26:28.691728 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:26:28.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:28.694767 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:26:28.808512 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:26:28.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:28.965898 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 2 19:26:28.965967 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Oct 2 19:26:28.974434 kernel: ena 0000:00:05.0: ENA device version: 0.10 Oct 2 19:26:28.974731 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Oct 2 19:26:28.979418 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Oct 2 19:26:28.979511 kernel: nvme nvme0: pci function 0000:00:04.0 Oct 2 19:26:28.986091 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:7b:84:21:ae:c9 Oct 2 19:26:28.992086 kernel: nvme nvme0: 2/0/0 default/read/poll queues Oct 2 19:26:28.998129 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 2 19:26:28.998169 kernel: GPT:9289727 != 16777215 Oct 2 19:26:28.998192 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 2 19:26:29.000354 kernel: GPT:9289727 != 16777215 Oct 2 19:26:29.001677 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 2 19:26:29.003595 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:26:29.008709 (udev-worker)[565]: Network interface NamePolicy= disabled on kernel command line. Oct 2 19:26:29.113080 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (562) Oct 2 19:26:29.210860 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:26:29.324226 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:26:29.338124 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Oct 2 19:26:29.350265 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:26:29.355514 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:26:29.361000 systemd[1]: Starting disk-uuid.service... Oct 2 19:26:29.385093 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:26:29.385400 disk-uuid[674]: Primary Header is updated. Oct 2 19:26:29.385400 disk-uuid[674]: Secondary Entries is updated. Oct 2 19:26:29.385400 disk-uuid[674]: Secondary Header is updated. Oct 2 19:26:30.410778 disk-uuid[675]: The operation has completed successfully. Oct 2 19:26:30.413200 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:26:30.684591 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:26:30.701036 kernel: kauditd_printk_skb: 4 callbacks suppressed Oct 2 19:26:30.701113 kernel: audit: type=1130 audit(1696274790.683:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:30.701147 kernel: audit: type=1131 audit(1696274790.693:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:30.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:30.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:30.684798 systemd[1]: Finished disk-uuid.service. Oct 2 19:26:30.717039 systemd[1]: Starting verity-setup.service... Oct 2 19:26:30.770331 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 2 19:26:30.863640 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:26:30.868757 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:26:30.878495 systemd[1]: Finished verity-setup.service. Oct 2 19:26:30.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:30.889110 kernel: audit: type=1130 audit(1696274790.880:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:30.966080 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:26:30.968041 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:26:30.971274 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 19:26:30.975363 systemd[1]: Starting ignition-setup.service... Oct 2 19:26:30.999180 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 19:26:31.035328 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:26:31.035398 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:26:31.037602 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:26:31.055117 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:26:31.087189 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:26:31.130684 systemd[1]: Finished ignition-setup.service. 
Oct 2 19:26:31.152224 kernel: audit: type=1130 audit(1696274791.133:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:31.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:31.135938 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:26:31.362158 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:26:31.376950 kernel: audit: type=1130 audit(1696274791.363:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:31.376986 kernel: audit: type=1334 audit(1696274791.371:20): prog-id=9 op=LOAD Oct 2 19:26:31.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:31.371000 audit: BPF prog-id=9 op=LOAD Oct 2 19:26:31.377990 systemd[1]: Starting systemd-networkd.service... Oct 2 19:26:31.435372 systemd-networkd[1020]: lo: Link UP Oct 2 19:26:31.435395 systemd-networkd[1020]: lo: Gained carrier Oct 2 19:26:31.437963 systemd-networkd[1020]: Enumeration completed Oct 2 19:26:31.439133 systemd-networkd[1020]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:26:31.444689 systemd[1]: Started systemd-networkd.service. Oct 2 19:26:31.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:31.448226 systemd[1]: Reached target network.target. Oct 2 19:26:31.457598 kernel: audit: type=1130 audit(1696274791.447:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:31.459603 systemd-networkd[1020]: eth0: Link UP Oct 2 19:26:31.459623 systemd-networkd[1020]: eth0: Gained carrier Oct 2 19:26:31.464101 systemd[1]: Starting iscsiuio.service... Oct 2 19:26:31.483921 systemd[1]: Started iscsiuio.service. Oct 2 19:26:31.486975 systemd-networkd[1020]: eth0: DHCPv4 address 172.31.26.69/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 2 19:26:31.507410 kernel: audit: type=1130 audit(1696274791.488:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:31.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:31.506861 systemd[1]: Starting iscsid.service... Oct 2 19:26:31.521063 iscsid[1025]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:26:31.521063 iscsid[1025]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 2 19:26:31.521063 iscsid[1025]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:26:31.521063 iscsid[1025]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 19:26:31.521063 iscsid[1025]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:26:31.541750 iscsid[1025]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:26:31.553751 systemd[1]: Started iscsid.service. Oct 2 19:26:31.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:31.564090 kernel: audit: type=1130 audit(1696274791.554:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:31.566278 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:26:31.613638 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:26:31.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:31.616246 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:26:31.643085 kernel: audit: type=1130 audit(1696274791.614:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:31.626240 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:26:31.628206 systemd[1]: Reached target remote-fs.target. Oct 2 19:26:31.645853 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:26:31.679883 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:26:31.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:31.903202 ignition[943]: Ignition 2.14.0 Oct 2 19:26:31.904928 ignition[943]: Stage: fetch-offline Oct 2 19:26:31.906762 ignition[943]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:26:31.909358 ignition[943]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:26:31.926313 ignition[943]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:26:31.928623 ignition[943]: Ignition finished successfully Oct 2 19:26:31.932264 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:26:31.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:31.936272 systemd[1]: Starting ignition-fetch.service... 
Oct 2 19:26:31.967393 ignition[1044]: Ignition 2.14.0 Oct 2 19:26:31.967423 ignition[1044]: Stage: fetch Oct 2 19:26:31.967774 ignition[1044]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:26:31.967832 ignition[1044]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:26:31.982876 ignition[1044]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:26:31.985253 ignition[1044]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:26:31.993468 ignition[1044]: INFO : PUT result: OK Oct 2 19:26:31.996871 ignition[1044]: DEBUG : parsed url from cmdline: "" Oct 2 19:26:31.996871 ignition[1044]: INFO : no config URL provided Oct 2 19:26:31.996871 ignition[1044]: INFO : reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:26:32.002947 ignition[1044]: INFO : no config at "/usr/lib/ignition/user.ign" Oct 2 19:26:32.002947 ignition[1044]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:26:32.002947 ignition[1044]: INFO : PUT result: OK Oct 2 19:26:32.008977 ignition[1044]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Oct 2 19:26:32.012377 ignition[1044]: INFO : GET result: OK Oct 2 19:26:32.013927 ignition[1044]: DEBUG : parsing config with SHA512: e0733b8d280f281496808fa401320f25363e9f45803026ae2230fe53b8f8300333265222c704a7226a84aba990881a9cb980e2c6c63982baeab578bca0709920 Oct 2 19:26:32.043222 unknown[1044]: fetched base config from "system" Oct 2 19:26:32.043751 unknown[1044]: fetched base config from "system" Oct 2 19:26:32.043771 unknown[1044]: fetched user config from "aws" Oct 2 19:26:32.049368 ignition[1044]: fetch: fetch complete Oct 2 19:26:32.049396 ignition[1044]: fetch: fetch passed Oct 2 19:26:32.049509 ignition[1044]: Ignition finished successfully Oct 2 19:26:32.056275 systemd[1]: Finished ignition-fetch.service. Oct 2 19:26:32.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:32.060706 systemd[1]: Starting ignition-kargs.service... Oct 2 19:26:32.093269 ignition[1050]: Ignition 2.14.0 Oct 2 19:26:32.093298 ignition[1050]: Stage: kargs Oct 2 19:26:32.093653 ignition[1050]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:26:32.093712 ignition[1050]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:26:32.109115 ignition[1050]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:26:32.111588 ignition[1050]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:26:32.114786 ignition[1050]: INFO : PUT result: OK Oct 2 19:26:32.119758 ignition[1050]: kargs: kargs passed Oct 2 19:26:32.119864 ignition[1050]: Ignition finished successfully Oct 2 19:26:32.124302 systemd[1]: Finished ignition-kargs.service. Oct 2 19:26:32.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:32.128817 systemd[1]: Starting ignition-disks.service... 
Oct 2 19:26:32.159208 ignition[1056]: Ignition 2.14.0 Oct 2 19:26:32.159236 ignition[1056]: Stage: disks Oct 2 19:26:32.159592 ignition[1056]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:26:32.159650 ignition[1056]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:26:32.176034 ignition[1056]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:26:32.178408 ignition[1056]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:26:32.181880 ignition[1056]: INFO : PUT result: OK Oct 2 19:26:32.187444 ignition[1056]: disks: disks passed Oct 2 19:26:32.187543 ignition[1056]: Ignition finished successfully Oct 2 19:26:32.191293 systemd[1]: Finished ignition-disks.service. Oct 2 19:26:32.194519 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:26:32.196437 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:26:32.199679 systemd[1]: Reached target local-fs.target. Oct 2 19:26:32.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:32.201350 systemd[1]: Reached target sysinit.target. Oct 2 19:26:32.204389 systemd[1]: Reached target basic.target. Oct 2 19:26:32.207390 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:26:32.269091 systemd-fsck[1064]: ROOT: clean, 603/553520 files, 56011/553472 blocks Oct 2 19:26:32.279609 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:26:32.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:32.284003 systemd[1]: Mounting sysroot.mount... Oct 2 19:26:32.317117 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:26:32.317600 systemd[1]: Mounted sysroot.mount. Oct 2 19:26:32.320873 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:26:32.326321 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:26:32.330246 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 19:26:32.331539 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:26:32.331605 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:26:32.354466 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:26:32.367572 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:26:32.375020 systemd[1]: Starting initrd-setup-root.service... Oct 2 19:26:32.396130 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1081) Oct 2 19:26:32.402155 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:26:32.402222 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:26:32.405657 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:26:32.409724 initrd-setup-root[1086]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:26:32.419092 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:26:32.425400 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Oct 2 19:26:32.483877 initrd-setup-root[1112]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:26:32.502988 initrd-setup-root[1120]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:26:32.522077 initrd-setup-root[1128]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:26:32.752279 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:26:32.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:32.756277 systemd[1]: Starting ignition-mount.service... Oct 2 19:26:32.773470 systemd[1]: Starting sysroot-boot.service... Oct 2 19:26:32.794956 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Oct 2 19:26:32.795160 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Oct 2 19:26:32.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:32.828809 systemd[1]: Finished sysroot-boot.service. Oct 2 19:26:32.848894 ignition[1149]: INFO : Ignition 2.14.0 Oct 2 19:26:32.848894 ignition[1149]: INFO : Stage: mount Oct 2 19:26:32.852389 ignition[1149]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:26:32.852389 ignition[1149]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:26:32.870229 systemd-networkd[1020]: eth0: Gained IPv6LL Oct 2 19:26:32.872671 ignition[1149]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:26:32.875357 ignition[1149]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:26:32.879033 ignition[1149]: INFO : PUT result: OK Oct 2 19:26:32.884001 ignition[1149]: INFO : mount: mount passed Oct 2 19:26:32.885720 ignition[1149]: INFO : Ignition finished successfully Oct 2 19:26:32.887146 systemd[1]: Finished ignition-mount.service. Oct 2 19:26:32.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:32.895806 systemd[1]: Starting ignition-files.service... Oct 2 19:26:32.919365 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:26:32.943093 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1157) Oct 2 19:26:32.949346 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:26:32.949390 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:26:32.951613 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:26:32.958071 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:26:32.963272 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Oct 2 19:26:32.998308 ignition[1176]: INFO : Ignition 2.14.0 Oct 2 19:26:33.002001 ignition[1176]: INFO : Stage: files Oct 2 19:26:33.002001 ignition[1176]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:26:33.002001 ignition[1176]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:26:33.019630 ignition[1176]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:26:33.022912 ignition[1176]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:26:33.026069 ignition[1176]: INFO : PUT result: OK Oct 2 19:26:33.030980 ignition[1176]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:26:33.035452 ignition[1176]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:26:33.035452 ignition[1176]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:26:33.095530 ignition[1176]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:26:33.098476 ignition[1176]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:26:33.101961 unknown[1176]: wrote ssh authorized keys file for user: core Oct 2 19:26:33.104475 ignition[1176]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:26:33.107768 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Oct 2 19:26:33.111805 ignition[1176]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Oct 2 19:26:33.501525 ignition[1176]: INFO : GET result: OK Oct 2 19:26:33.953781 ignition[1176]: DEBUG : file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Oct 2 19:26:33.958932 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Oct 2 19:26:33.958932 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Oct 2 19:26:33.958932 ignition[1176]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Oct 2 19:26:34.046615 ignition[1176]: INFO : GET result: OK Oct 2 19:26:34.289157 ignition[1176]: DEBUG : file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Oct 2 19:26:34.294844 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Oct 2 19:26:34.294844 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Oct 2 19:26:34.294844 ignition[1176]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:26:34.312823 ignition[1176]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1788101712" Oct 2 19:26:34.319409 ignition[1176]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1788101712": device or resource busy Oct 2 19:26:34.319409 
ignition[1176]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1788101712", trying btrfs: device or resource busy Oct 2 19:26:34.340889 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1179) Oct 2 19:26:34.340927 ignition[1176]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1788101712" Oct 2 19:26:34.340927 ignition[1176]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1788101712" Oct 2 19:26:34.356719 ignition[1176]: INFO : op(3): [started] unmounting "/mnt/oem1788101712" Oct 2 19:26:34.359141 ignition[1176]: INFO : op(3): [finished] unmounting "/mnt/oem1788101712" Oct 2 19:26:34.361580 systemd[1]: mnt-oem1788101712.mount: Deactivated successfully. Oct 2 19:26:34.364467 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Oct 2 19:26:34.368387 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:26:34.371904 ignition[1176]: INFO : GET https://storage.googleapis.com/kubernetes-release/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Oct 2 19:26:34.448646 ignition[1176]: INFO : GET result: OK Oct 2 19:26:35.665819 ignition[1176]: DEBUG : file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Oct 2 19:26:35.671037 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:26:35.671037 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:26:35.671037 ignition[1176]: INFO : GET https://storage.googleapis.com/kubernetes-release/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Oct 2 19:26:35.723869 ignition[1176]: INFO : GET result: OK Oct 2 19:26:37.312806 ignition[1176]: DEBUG : file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Oct 2 19:26:37.318028 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:26:37.321485 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:26:37.325364 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:26:37.328994 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:26:37.332782 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:26:37.336424 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Oct 2 19:26:37.341133 ignition[1176]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:26:37.351260 ignition[1176]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2062825365" Oct 2 19:26:37.354231 ignition[1176]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2062825365": device or resource busy Oct 2 19:26:37.357522 
ignition[1176]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2062825365", trying btrfs: device or resource busy Oct 2 19:26:37.361129 ignition[1176]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2062825365" Oct 2 19:26:37.364840 ignition[1176]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2062825365" Oct 2 19:26:37.375385 ignition[1176]: INFO : op(6): [started] unmounting "/mnt/oem2062825365" Oct 2 19:26:37.381100 ignition[1176]: INFO : op(6): [finished] unmounting "/mnt/oem2062825365" Oct 2 19:26:37.381100 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Oct 2 19:26:37.381100 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Oct 2 19:26:37.381100 ignition[1176]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:26:37.381043 systemd[1]: mnt-oem2062825365.mount: Deactivated successfully. Oct 2 19:26:37.405559 ignition[1176]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem606802739" Oct 2 19:26:37.408549 ignition[1176]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem606802739": device or resource busy Oct 2 19:26:37.408549 ignition[1176]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem606802739", trying btrfs: device or resource busy Oct 2 19:26:37.408549 ignition[1176]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem606802739" Oct 2 19:26:37.420911 ignition[1176]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem606802739" Oct 2 19:26:37.420911 ignition[1176]: INFO : op(9): [started] unmounting "/mnt/oem606802739" Oct 2 19:26:37.420911 ignition[1176]: INFO : op(9): [finished] unmounting "/mnt/oem606802739" Oct 2 19:26:37.420911 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Oct 2 19:26:37.420911 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 19:26:37.420911 ignition[1176]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:26:37.446714 ignition[1176]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem973884654" Oct 2 19:26:37.446714 ignition[1176]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem973884654": device or resource busy Oct 2 19:26:37.446714 ignition[1176]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem973884654", trying btrfs: device or resource busy Oct 2 19:26:37.446714 ignition[1176]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem973884654" Oct 2 19:26:37.472517 ignition[1176]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem973884654" Oct 2 19:26:37.475597 ignition[1176]: INFO : op(c): [started] unmounting "/mnt/oem973884654" Oct 2 19:26:37.482243 systemd[1]: mnt-oem973884654.mount: Deactivated successfully. 
Oct 2 19:26:37.485601 ignition[1176]: INFO : op(c): [finished] unmounting "/mnt/oem973884654" Oct 2 19:26:37.487916 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 19:26:37.487916 ignition[1176]: INFO : files: op(d): [started] processing unit "nvidia.service" Oct 2 19:26:37.487916 ignition[1176]: INFO : files: op(d): [finished] processing unit "nvidia.service" Oct 2 19:26:37.487916 ignition[1176]: INFO : files: op(e): [started] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:26:37.487916 ignition[1176]: INFO : files: op(e): [finished] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:26:37.487916 ignition[1176]: INFO : files: op(f): [started] processing unit "amazon-ssm-agent.service" Oct 2 19:26:37.504870 ignition[1176]: INFO : files: op(f): op(10): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Oct 2 19:26:37.504870 ignition[1176]: INFO : files: op(f): op(10): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Oct 2 19:26:37.504870 ignition[1176]: INFO : files: op(f): [finished] processing unit "amazon-ssm-agent.service" Oct 2 19:26:37.504870 ignition[1176]: INFO : files: op(11): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:26:37.518291 ignition[1176]: INFO : files: op(11): op(12): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:26:37.518291 ignition[1176]: INFO : files: op(11): op(12): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:26:37.518291 ignition[1176]: INFO : files: op(11): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:26:37.518291 ignition[1176]: INFO : files: op(13): [started] processing unit "prepare-critools.service" Oct 2 19:26:37.518291 ignition[1176]: INFO : files: op(13): op(14): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:26:37.518291 ignition[1176]: INFO : files: op(13): op(14): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:26:37.518291 ignition[1176]: INFO : files: op(13): [finished] processing unit "prepare-critools.service" Oct 2 19:26:37.518291 ignition[1176]: INFO : files: op(15): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:26:37.545364 ignition[1176]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:26:37.545364 ignition[1176]: INFO : files: op(16): [started] setting preset to enabled for "nvidia.service" Oct 2 19:26:37.545364 ignition[1176]: INFO : files: op(16): [finished] setting preset to enabled for "nvidia.service" Oct 2 19:26:37.545364 ignition[1176]: INFO : files: op(17): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:26:37.545364 ignition[1176]: INFO : files: op(17): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:26:37.545364 ignition[1176]: INFO : files: op(18): [started] setting preset to enabled for "amazon-ssm-agent.service" Oct 2 19:26:37.545364 ignition[1176]: INFO : files: op(18): [finished] setting preset to enabled for "amazon-ssm-agent.service" Oct 2 19:26:37.545364 ignition[1176]: INFO : files: op(19): [started] setting 
preset to enabled for "prepare-cni-plugins.service" Oct 2 19:26:37.569161 ignition[1176]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:26:37.578804 ignition[1176]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:26:37.582540 ignition[1176]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:26:37.586180 ignition[1176]: INFO : files: files passed Oct 2 19:26:37.587804 ignition[1176]: INFO : Ignition finished successfully Oct 2 19:26:37.591374 systemd[1]: Finished ignition-files.service. Oct 2 19:26:37.596691 kernel: kauditd_printk_skb: 9 callbacks suppressed Oct 2 19:26:37.596728 kernel: audit: type=1130 audit(1696274797.593:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.607656 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:26:37.611873 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:26:37.628813 systemd[1]: Starting ignition-quench.service... Oct 2 19:26:37.644863 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:26:37.646926 systemd[1]: Finished ignition-quench.service. Oct 2 19:26:37.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.661299 initrd-setup-root-after-ignition[1201]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:26:37.668307 kernel: audit: type=1130 audit(1696274797.647:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.668344 kernel: audit: type=1131 audit(1696274797.647:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.669070 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:26:37.683090 kernel: audit: type=1130 audit(1696274797.670:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.671343 systemd[1]: Reached target ignition-complete.target. Oct 2 19:26:37.687841 systemd[1]: Starting initrd-parse-etc.service... 
Oct 2 19:26:37.738397 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:26:37.739023 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:26:37.775015 kernel: audit: type=1130 audit(1696274797.739:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.775076 kernel: audit: type=1131 audit(1696274797.739:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.750003 systemd[1]: Reached target initrd-fs.target. Oct 2 19:26:37.759020 systemd[1]: Reached target initrd.target. Oct 2 19:26:37.762773 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:26:37.764257 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:26:37.807799 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:26:37.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.809667 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:26:37.826322 kernel: audit: type=1130 audit(1696274797.806:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.846039 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:26:37.865003 kernel: audit: type=1131 audit(1696274797.854:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.849407 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:26:37.851606 systemd[1]: Stopped target timers.target. Oct 2 19:26:37.853373 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:26:37.853667 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:26:37.863401 systemd[1]: Stopped target initrd.target. Oct 2 19:26:37.865707 systemd[1]: Stopped target basic.target. Oct 2 19:26:37.869413 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:26:37.872947 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:26:37.918996 kernel: audit: type=1131 audit(1696274797.906:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:37.875005 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:26:37.877373 systemd[1]: Stopped target remote-fs.target. Oct 2 19:26:37.880667 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:26:37.882633 systemd[1]: Stopped target sysinit.target. Oct 2 19:26:37.885141 systemd[1]: Stopped target local-fs.target. Oct 2 19:26:37.899726 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:26:37.902380 systemd[1]: Stopped target swap.target. Oct 2 19:26:37.904039 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:26:37.904345 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:26:37.912270 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:26:37.935387 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:26:37.948661 kernel: audit: type=1131 audit(1696274797.936:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.935666 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:26:37.978582 iscsid[1025]: iscsid shutting down. Oct 2 19:26:37.937642 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:26:37.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.937921 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:26:37.950951 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:26:37.951164 systemd[1]: Stopped ignition-files.service. Oct 2 19:26:37.954420 systemd[1]: Stopping ignition-mount.service... Oct 2 19:26:37.960649 systemd[1]: Stopping iscsid.service... Oct 2 19:26:37.965217 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:26:37.965568 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:26:37.969321 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:26:38.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:26:38.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:37.973552 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:26:37.973969 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:26:38.048110 ignition[1214]: INFO : Ignition 2.14.0 Oct 2 19:26:38.048110 ignition[1214]: INFO : Stage: umount Oct 2 19:26:38.048110 ignition[1214]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:26:38.048110 ignition[1214]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:26:37.976311 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:26:37.980387 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:26:38.074292 ignition[1214]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:26:38.074292 ignition[1214]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:26:38.074292 ignition[1214]: INFO : PUT result: OK Oct 2 19:26:37.993655 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:26:37.993880 systemd[1]: Stopped iscsid.service. Oct 2 19:26:38.004993 systemd[1]: Stopping iscsiuio.service... Oct 2 19:26:38.024132 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:26:38.024323 systemd[1]: Stopped iscsiuio.service. Oct 2 19:26:38.032717 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:26:38.032925 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:26:38.114937 ignition[1214]: INFO : umount: umount passed Oct 2 19:26:38.116732 ignition[1214]: INFO : Ignition finished successfully Oct 2 19:26:38.120265 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:26:38.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.120468 systemd[1]: Stopped ignition-mount.service. Oct 2 19:26:38.122389 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:26:38.122488 systemd[1]: Stopped ignition-disks.service. Oct 2 19:26:38.126485 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:26:38.126571 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:26:38.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.137020 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 2 19:26:38.137153 systemd[1]: Stopped ignition-fetch.service. 
Oct 2 19:26:38.139902 systemd[1]: Stopped target network.target. Oct 2 19:26:38.147175 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:26:38.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.147271 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:26:38.149314 systemd[1]: Stopped target paths.target. Oct 2 19:26:38.156775 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:26:38.170938 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:26:38.173227 systemd[1]: Stopped target slices.target. Oct 2 19:26:38.174933 systemd[1]: Stopped target sockets.target. Oct 2 19:26:38.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.179482 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:26:38.179555 systemd[1]: Closed iscsid.socket. Oct 2 19:26:38.181005 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:26:38.181086 systemd[1]: Closed iscsiuio.socket. Oct 2 19:26:38.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.182554 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:26:38.182637 systemd[1]: Stopped ignition-setup.service. Oct 2 19:26:38.184640 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:26:38.186305 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:26:38.188272 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:26:38.188437 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:26:38.190647 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:26:38.190737 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:26:38.198358 systemd-networkd[1020]: eth0: DHCPv6 lease lost Oct 2 19:26:38.200126 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:26:38.200330 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:26:38.224881 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:26:38.227023 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:26:38.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.234159 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:26:38.234246 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:26:38.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:38.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.247000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:26:38.249000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:26:38.238925 systemd[1]: Stopping network-cleanup.service... Oct 2 19:26:38.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.240425 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:26:38.240535 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:26:38.246869 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:26:38.246980 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:26:38.248935 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:26:38.250210 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:26:38.253152 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:26:38.275775 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:26:38.276499 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:26:38.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.289716 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:26:38.289806 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:26:38.295109 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:26:38.295189 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:26:38.301769 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:26:38.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.301878 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:26:38.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.306288 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:26:38.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.306385 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:26:38.310006 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:26:38.310127 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:26:38.314772 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:26:38.325767 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:26:38.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.325900 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:26:38.328815 systemd[1]: network-cleanup.service: Deactivated successfully. 
Oct 2 19:26:38.329122 systemd[1]: Stopped network-cleanup.service. Oct 2 19:26:38.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.356342 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:26:38.356561 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:26:38.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:38.364415 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:26:38.364583 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:26:38.369594 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:26:38.374480 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:26:38.398865 systemd[1]: Switching root. Oct 2 19:26:38.427205 systemd-journald[308]: Journal stopped Oct 2 19:26:44.772263 systemd-journald[308]: Received SIGTERM from PID 1 (systemd). Oct 2 19:26:44.772830 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:26:44.773001 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 19:26:44.773037 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:26:44.774177 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:26:44.774215 kernel: SELinux: policy capability open_perms=1 Oct 2 19:26:44.774247 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:26:44.774277 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:26:44.774309 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:26:44.774359 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:26:44.774399 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:26:44.774431 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:26:44.774464 systemd[1]: Successfully loaded SELinux policy in 84.629ms. Oct 2 19:26:44.774679 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.775ms. Oct 2 19:26:44.774720 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:26:44.774755 systemd[1]: Detected virtualization amazon. Oct 2 19:26:44.774805 systemd[1]: Detected architecture arm64. Oct 2 19:26:44.774840 systemd[1]: Detected first boot. Oct 2 19:26:44.774877 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:26:44.774909 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:26:44.774943 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:26:44.774978 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Oct 2 19:26:44.775172 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:26:44.775210 kernel: kauditd_printk_skb: 39 callbacks suppressed Oct 2 19:26:44.775239 kernel: audit: type=1334 audit(1696274804.258:83): prog-id=12 op=LOAD Oct 2 19:26:44.775269 kernel: audit: type=1334 audit(1696274804.258:84): prog-id=3 op=UNLOAD Oct 2 19:26:44.775303 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:26:44.775336 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:26:44.775365 kernel: audit: type=1334 audit(1696274804.260:85): prog-id=13 op=LOAD Oct 2 19:26:44.775394 kernel: audit: type=1334 audit(1696274804.263:86): prog-id=14 op=LOAD Oct 2 19:26:44.775424 kernel: audit: type=1334 audit(1696274804.263:87): prog-id=4 op=UNLOAD Oct 2 19:26:44.775452 kernel: audit: type=1334 audit(1696274804.263:88): prog-id=5 op=UNLOAD Oct 2 19:26:44.775483 kernel: audit: type=1131 audit(1696274804.265:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.775514 kernel: audit: type=1334 audit(1696274804.271:90): prog-id=12 op=UNLOAD Oct 2 19:26:44.775550 kernel: audit: type=1130 audit(1696274804.294:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.775581 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:26:44.775612 kernel: audit: type=1131 audit(1696274804.294:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.775644 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:26:44.775677 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:26:44.775709 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Oct 2 19:26:44.775806 systemd[1]: Created slice system-getty.slice. Oct 2 19:26:44.775845 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:26:44.775876 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:26:44.775906 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:26:44.775936 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:26:44.775965 systemd[1]: Created slice user.slice. Oct 2 19:26:44.775997 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:26:44.776027 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:26:44.776074 systemd[1]: Set up automount boot.automount. Oct 2 19:26:44.776113 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:26:44.776148 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:26:44.776178 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:26:44.776209 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:26:44.776239 systemd[1]: Reached target integritysetup.target. Oct 2 19:26:44.776272 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:26:44.776304 systemd[1]: Reached target remote-fs.target. Oct 2 19:26:44.776335 systemd[1]: Reached target slices.target. 
Oct 2 19:26:44.776364 systemd[1]: Reached target swap.target. Oct 2 19:26:44.776396 systemd[1]: Reached target torcx.target. Oct 2 19:26:44.776430 systemd[1]: Reached target veritysetup.target. Oct 2 19:26:44.776461 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:26:44.776495 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:26:44.776528 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:26:44.776559 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:26:44.776591 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:26:44.776623 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:26:44.776653 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:26:44.776684 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:26:44.776714 systemd[1]: Mounting media.mount... Oct 2 19:26:44.776749 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:26:44.776780 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:26:44.776811 systemd[1]: Mounting tmp.mount... Oct 2 19:26:44.776842 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:26:44.776872 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:26:44.776904 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:26:44.776933 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:26:44.776965 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:26:44.776995 systemd[1]: Starting modprobe@drm.service... Oct 2 19:26:44.777029 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:26:44.778886 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:26:44.778933 systemd[1]: Starting modprobe@loop.service... Oct 2 19:26:44.778965 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:26:44.778996 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:26:44.779026 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:26:44.779096 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:26:44.779134 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:26:44.779173 systemd[1]: Stopped systemd-journald.service. Oct 2 19:26:44.779209 systemd[1]: Starting systemd-journald.service... Oct 2 19:26:44.779239 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:26:44.779271 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:26:44.779304 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:26:44.779334 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:26:44.779363 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:26:44.779394 systemd[1]: Stopped verity-setup.service. Oct 2 19:26:44.779424 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:26:44.781860 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:26:44.781899 systemd[1]: Mounted media.mount. Oct 2 19:26:44.781930 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:26:44.781962 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:26:44.781993 systemd[1]: Mounted tmp.mount. Oct 2 19:26:44.782022 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:26:44.782068 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:26:44.782106 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:26:44.782138 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:26:44.782168 systemd[1]: Finished modprobe@dm_mod.service. 
Oct 2 19:26:44.782204 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:26:44.782236 systemd[1]: Finished modprobe@drm.service. Oct 2 19:26:44.782268 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:26:44.782298 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:26:44.782328 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:26:44.782362 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:26:44.782392 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:26:44.782424 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:26:44.782452 kernel: loop: module loaded Oct 2 19:26:44.782483 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:26:44.782511 kernel: fuse: init (API version 7.34) Oct 2 19:26:44.782543 systemd-journald[1325]: Journal started Oct 2 19:26:44.782705 systemd-journald[1325]: Runtime Journal (/run/log/journal/ec2ceddd2331b069e91336326430be91) is 8.0M, max 75.4M, 67.4M free. Oct 2 19:26:39.219000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:26:39.436000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:26:39.436000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:26:39.436000 audit: BPF prog-id=10 op=LOAD Oct 2 19:26:39.436000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:26:39.436000 audit: BPF prog-id=11 op=LOAD Oct 2 19:26:39.436000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:26:44.258000 audit: BPF prog-id=12 op=LOAD Oct 2 19:26:44.258000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:26:44.820898 systemd[1]: Started systemd-journald.service. Oct 2 19:26:44.260000 audit: BPF prog-id=13 op=LOAD Oct 2 19:26:44.263000 audit: BPF prog-id=14 op=LOAD Oct 2 19:26:44.263000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:26:44.263000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:26:44.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.271000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:26:44.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:44.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.588000 audit: BPF prog-id=15 op=LOAD Oct 2 19:26:44.590000 audit: BPF prog-id=16 op=LOAD Oct 2 19:26:44.590000 audit: BPF prog-id=17 op=LOAD Oct 2 19:26:44.590000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:26:44.590000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:26:44.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:44.768000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:26:44.768000 audit[1325]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffe5280ce0 a2=4000 a3=1 items=0 ppid=1 pid=1325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:44.768000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:26:44.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.255750 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:26:39.643737 /usr/lib/systemd/system-generators/torcx-generator[1247]: time="2023-10-02T19:26:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:26:44.266336 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 2 19:26:39.654382 /usr/lib/systemd/system-generators/torcx-generator[1247]: time="2023-10-02T19:26:39Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:26:44.792772 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Oct 2 19:26:39.654435 /usr/lib/systemd/system-generators/torcx-generator[1247]: time="2023-10-02T19:26:39Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:26:44.793723 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:26:39.654506 /usr/lib/systemd/system-generators/torcx-generator[1247]: time="2023-10-02T19:26:39Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:26:44.796135 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:26:39.654534 /usr/lib/systemd/system-generators/torcx-generator[1247]: time="2023-10-02T19:26:39Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:26:44.796441 systemd[1]: Finished modprobe@loop.service. Oct 2 19:26:39.654603 /usr/lib/systemd/system-generators/torcx-generator[1247]: time="2023-10-02T19:26:39Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:26:44.798826 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:26:39.654634 /usr/lib/systemd/system-generators/torcx-generator[1247]: time="2023-10-02T19:26:39Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:26:44.801485 systemd[1]: Reached target network-pre.target. Oct 2 19:26:39.655110 /usr/lib/systemd/system-generators/torcx-generator[1247]: time="2023-10-02T19:26:39Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:26:44.817522 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:26:39.655197 /usr/lib/systemd/system-generators/torcx-generator[1247]: time="2023-10-02T19:26:39Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:26:44.819590 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:26:39.655232 /usr/lib/systemd/system-generators/torcx-generator[1247]: time="2023-10-02T19:26:39Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:26:44.823758 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:26:39.656230 /usr/lib/systemd/system-generators/torcx-generator[1247]: time="2023-10-02T19:26:39Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:26:44.828040 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:26:39.656320 /usr/lib/systemd/system-generators/torcx-generator[1247]: time="2023-10-02T19:26:39Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:26:44.830158 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:26:39.656370 /usr/lib/systemd/system-generators/torcx-generator[1247]: time="2023-10-02T19:26:39Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:26:44.833921 systemd[1]: Starting systemd-random-seed.service... 
Oct 2 19:26:39.656411 /usr/lib/systemd/system-generators/torcx-generator[1247]: time="2023-10-02T19:26:39Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:26:44.835763 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:26:39.656459 /usr/lib/systemd/system-generators/torcx-generator[1247]: time="2023-10-02T19:26:39Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:26:44.841191 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:26:39.656501 /usr/lib/systemd/system-generators/torcx-generator[1247]: time="2023-10-02T19:26:39Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:26:43.377944 /usr/lib/systemd/system-generators/torcx-generator[1247]: time="2023-10-02T19:26:43Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:26:43.378522 /usr/lib/systemd/system-generators/torcx-generator[1247]: time="2023-10-02T19:26:43Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:26:43.378758 /usr/lib/systemd/system-generators/torcx-generator[1247]: time="2023-10-02T19:26:43Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:26:43.379238 /usr/lib/systemd/system-generators/torcx-generator[1247]: time="2023-10-02T19:26:43Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:26:44.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.884557 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:26:43.379347 /usr/lib/systemd/system-generators/torcx-generator[1247]: time="2023-10-02T19:26:43Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:26:44.887133 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:26:43.379483 /usr/lib/systemd/system-generators/torcx-generator[1247]: time="2023-10-02T19:26:43Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:26:44.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.892618 systemd[1]: Finished systemd-sysctl.service. 
Oct 2 19:26:44.915461 systemd-journald[1325]: Time spent on flushing to /var/log/journal/ec2ceddd2331b069e91336326430be91 is 61.831ms for 1136 entries. Oct 2 19:26:44.915461 systemd-journald[1325]: System Journal (/var/log/journal/ec2ceddd2331b069e91336326430be91) is 8.0M, max 195.6M, 187.6M free. Oct 2 19:26:45.000878 systemd-journald[1325]: Received client request to flush runtime journal. Oct 2 19:26:45.003527 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:26:45.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:45.028521 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:26:45.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:45.032736 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:26:45.068168 udevadm[1360]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 2 19:26:45.073852 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:26:45.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:45.078348 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:26:45.153585 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:26:45.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:45.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:45.792377 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:26:45.794000 audit: BPF prog-id=18 op=LOAD Oct 2 19:26:45.794000 audit: BPF prog-id=19 op=LOAD Oct 2 19:26:45.794000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:26:45.794000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:26:45.796666 systemd[1]: Starting systemd-udevd.service... Oct 2 19:26:45.844473 systemd-udevd[1367]: Using default interface naming scheme 'v252'. Oct 2 19:26:45.887737 systemd[1]: Started systemd-udevd.service. Oct 2 19:26:45.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:45.890000 audit: BPF prog-id=20 op=LOAD Oct 2 19:26:45.896964 systemd[1]: Starting systemd-networkd.service... Oct 2 19:26:45.913000 audit: BPF prog-id=21 op=LOAD Oct 2 19:26:45.913000 audit: BPF prog-id=22 op=LOAD Oct 2 19:26:45.914000 audit: BPF prog-id=23 op=LOAD Oct 2 19:26:45.916280 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:26:46.003329 (udev-worker)[1373]: Network interface NamePolicy= disabled on kernel command line. Oct 2 19:26:46.040683 systemd[1]: Started systemd-userdbd.service. 
Oct 2 19:26:46.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:46.060101 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 19:26:46.250011 systemd-networkd[1377]: lo: Link UP Oct 2 19:26:46.250034 systemd-networkd[1377]: lo: Gained carrier Oct 2 19:26:46.250946 systemd-networkd[1377]: Enumeration completed Oct 2 19:26:46.251137 systemd[1]: Started systemd-networkd.service. Oct 2 19:26:46.251234 systemd-networkd[1377]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:26:46.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:46.256847 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:26:46.265098 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 2 19:26:46.265421 systemd-networkd[1377]: eth0: Link UP Oct 2 19:26:46.265709 systemd-networkd[1377]: eth0: Gained carrier Oct 2 19:26:46.291396 systemd-networkd[1377]: eth0: DHCPv4 address 172.31.26.69/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 2 19:26:46.347082 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1373) Oct 2 19:26:46.553458 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:26:46.559710 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:26:46.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:46.564752 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:26:46.632857 lvm[1486]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:26:46.672930 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:26:46.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:46.675143 systemd[1]: Reached target cryptsetup.target. Oct 2 19:26:46.679270 systemd[1]: Starting lvm2-activation.service... Oct 2 19:26:46.694002 lvm[1487]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:26:46.732131 systemd[1]: Finished lvm2-activation.service. Oct 2 19:26:46.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:46.734189 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:26:46.736037 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:26:46.736241 systemd[1]: Reached target local-fs.target. Oct 2 19:26:46.738276 systemd[1]: Reached target machines.target. Oct 2 19:26:46.742692 systemd[1]: Starting ldconfig.service... Oct 2 19:26:46.745046 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Oct 2 19:26:46.745364 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:26:46.748404 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:26:46.753946 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:26:46.759098 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:26:46.761251 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:26:46.761380 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:26:46.765511 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:26:46.805986 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1489 (bootctl) Oct 2 19:26:46.808989 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:26:46.871768 systemd-tmpfiles[1492]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:26:46.885474 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:26:46.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:46.891505 systemd-tmpfiles[1492]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:26:46.920624 systemd-tmpfiles[1492]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:26:46.976158 systemd-fsck[1498]: fsck.fat 4.2 (2021-01-31) Oct 2 19:26:46.976158 systemd-fsck[1498]: /dev/nvme0n1p1: 236 files, 113463/258078 clusters Oct 2 19:26:46.983458 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:26:46.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:46.988403 systemd[1]: Mounting boot.mount... Oct 2 19:26:47.033320 systemd[1]: Mounted boot.mount. Oct 2 19:26:47.063914 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:26:47.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:47.250714 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:26:47.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:47.256307 systemd[1]: Starting audit-rules.service... Oct 2 19:26:47.261528 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:26:47.265809 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:26:47.268000 audit: BPF prog-id=24 op=LOAD Oct 2 19:26:47.276006 systemd[1]: Starting systemd-resolved.service... Oct 2 19:26:47.278000 audit: BPF prog-id=25 op=LOAD Oct 2 19:26:47.280978 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:26:47.287197 systemd[1]: Starting systemd-update-utmp.service... 
Oct 2 19:26:47.327493 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:26:47.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:47.329696 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:26:47.335000 audit[1517]: SYSTEM_BOOT pid=1517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:26:47.340081 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:26:47.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:47.385654 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:26:47.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:47.486913 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:26:47.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:47.489087 systemd[1]: Reached target time-set.target. Oct 2 19:26:47.503044 systemd-resolved[1515]: Positive Trust Anchors: Oct 2 19:26:47.503584 systemd-resolved[1515]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:26:47.503742 systemd-resolved[1515]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:26:47.524000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:26:47.524000 audit[1533]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdd189c00 a2=420 a3=0 items=0 ppid=1512 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:47.524000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:26:47.526492 augenrules[1533]: No rules Oct 2 19:26:47.528303 systemd[1]: Finished audit-rules.service. Oct 2 19:26:47.536436 systemd-resolved[1515]: Defaulting to hostname 'linux'. Oct 2 19:26:47.540292 systemd[1]: Started systemd-resolved.service. Oct 2 19:26:47.542390 systemd[1]: Reached target network.target. Oct 2 19:26:47.544218 systemd[1]: Reached target nss-lookup.target. 
Oct 2 19:26:47.720412 systemd-timesyncd[1516]: Contacted time server 198.60.22.240:123 (0.flatcar.pool.ntp.org). Oct 2 19:26:47.720536 systemd-timesyncd[1516]: Initial clock synchronization to Mon 2023-10-02 19:26:47.937250 UTC. Oct 2 19:26:47.974259 systemd-networkd[1377]: eth0: Gained IPv6LL Oct 2 19:26:47.979017 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:26:47.981322 systemd[1]: Reached target network-online.target. Oct 2 19:26:48.225526 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:26:48.228633 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:26:48.298963 ldconfig[1488]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:26:48.307149 systemd[1]: Finished ldconfig.service. Oct 2 19:26:48.311340 systemd[1]: Starting systemd-update-done.service... Oct 2 19:26:48.334116 systemd[1]: Finished systemd-update-done.service. Oct 2 19:26:48.336399 systemd[1]: Reached target sysinit.target. Oct 2 19:26:48.338399 systemd[1]: Started motdgen.path. Oct 2 19:26:48.340194 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:26:48.343112 systemd[1]: Started logrotate.timer. Oct 2 19:26:48.345155 systemd[1]: Started mdadm.timer. Oct 2 19:26:48.346829 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:26:48.348822 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:26:48.349040 systemd[1]: Reached target paths.target. Oct 2 19:26:48.350783 systemd[1]: Reached target timers.target. Oct 2 19:26:48.353237 systemd[1]: Listening on dbus.socket. Oct 2 19:26:48.357103 systemd[1]: Starting docker.socket... Oct 2 19:26:48.366709 systemd[1]: Listening on sshd.socket. Oct 2 19:26:48.368863 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:26:48.369970 systemd[1]: Listening on docker.socket. Oct 2 19:26:48.372054 systemd[1]: Reached target sockets.target. Oct 2 19:26:48.374063 systemd[1]: Reached target basic.target. Oct 2 19:26:48.376122 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:26:48.376315 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:26:48.389307 systemd[1]: Started amazon-ssm-agent.service. Oct 2 19:26:48.394303 systemd[1]: Starting containerd.service... Oct 2 19:26:48.401534 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Oct 2 19:26:48.406962 systemd[1]: Starting dbus.service... Oct 2 19:26:48.410596 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:26:48.418611 systemd[1]: Starting extend-filesystems.service... Oct 2 19:26:48.420480 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:26:48.422913 systemd[1]: Starting motdgen.service... Oct 2 19:26:48.433296 systemd[1]: Started nvidia.service. Oct 2 19:26:48.439799 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:26:48.444033 systemd[1]: Starting prepare-critools.service... Oct 2 19:26:48.449396 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:26:48.453495 systemd[1]: Starting sshd-keygen.service... Oct 2 19:26:48.462772 systemd[1]: Starting systemd-logind.service... 
Oct 2 19:26:48.464492 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:26:48.464615 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:26:48.465482 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:26:48.467120 systemd[1]: Starting update-engine.service... Oct 2 19:26:48.471702 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:26:48.580653 jq[1552]: false Oct 2 19:26:48.579925 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:26:48.581041 jq[1562]: true Oct 2 19:26:48.580300 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:26:48.667511 tar[1564]: crictl Oct 2 19:26:48.669213 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:26:48.669590 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:26:48.677558 jq[1568]: true Oct 2 19:26:48.681333 tar[1565]: ./ Oct 2 19:26:48.684168 tar[1565]: ./macvlan Oct 2 19:26:48.687569 dbus-daemon[1551]: [system] SELinux support is enabled Oct 2 19:26:48.687853 systemd[1]: Started dbus.service. Oct 2 19:26:48.692831 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:26:48.692874 systemd[1]: Reached target system-config.target. Oct 2 19:26:48.694918 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:26:48.694952 systemd[1]: Reached target user-config.target. Oct 2 19:26:48.726964 dbus-daemon[1551]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1377 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 2 19:26:48.728993 dbus-daemon[1551]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 2 19:26:48.737462 systemd[1]: Starting systemd-hostnamed.service... Oct 2 19:26:48.857856 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:26:48.858250 systemd[1]: Finished motdgen.service. Oct 2 19:26:48.873374 amazon-ssm-agent[1548]: 2023/10/02 19:26:48 Failed to load instance info from vault. RegistrationKey does not exist. Oct 2 19:26:48.886057 amazon-ssm-agent[1548]: Initializing new seelog logger Oct 2 19:26:48.886323 amazon-ssm-agent[1548]: New Seelog Logger Creation Complete Oct 2 19:26:48.886493 amazon-ssm-agent[1548]: 2023/10/02 19:26:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 2 19:26:48.886493 amazon-ssm-agent[1548]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Oct 2 19:26:48.888689 extend-filesystems[1553]: Found nvme0n1 Oct 2 19:26:48.890803 extend-filesystems[1553]: Found nvme0n1p1 Oct 2 19:26:48.890803 extend-filesystems[1553]: Found nvme0n1p2 Oct 2 19:26:48.890803 extend-filesystems[1553]: Found nvme0n1p3 Oct 2 19:26:48.890803 extend-filesystems[1553]: Found usr Oct 2 19:26:48.890803 extend-filesystems[1553]: Found nvme0n1p4 Oct 2 19:26:48.890803 extend-filesystems[1553]: Found nvme0n1p6 Oct 2 19:26:48.890803 extend-filesystems[1553]: Found nvme0n1p7 Oct 2 19:26:48.890803 extend-filesystems[1553]: Found nvme0n1p9 Oct 2 19:26:48.890803 extend-filesystems[1553]: Checking size of /dev/nvme0n1p9 Oct 2 19:26:48.931657 amazon-ssm-agent[1548]: 2023/10/02 19:26:48 processing appconfig overrides Oct 2 19:26:48.976142 update_engine[1561]: I1002 19:26:48.975636 1561 main.cc:92] Flatcar Update Engine starting Oct 2 19:26:49.009919 systemd[1]: Started update-engine.service. Oct 2 19:26:49.014774 systemd[1]: Started locksmithd.service. Oct 2 19:26:49.017637 update_engine[1561]: I1002 19:26:49.017592 1561 update_check_scheduler.cc:74] Next update check in 4m44s Oct 2 19:26:49.040347 extend-filesystems[1553]: Resized partition /dev/nvme0n1p9 Oct 2 19:26:49.063864 extend-filesystems[1618]: resize2fs 1.46.5 (30-Dec-2021) Oct 2 19:26:49.093099 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Oct 2 19:26:49.152208 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Oct 2 19:26:49.194860 systemd-logind[1560]: Watching system buttons on /dev/input/event0 (Power Button) Oct 2 19:26:49.195992 systemd-logind[1560]: New seat seat0. Oct 2 19:26:49.197147 extend-filesystems[1618]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Oct 2 19:26:49.197147 extend-filesystems[1618]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 2 19:26:49.197147 extend-filesystems[1618]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Oct 2 19:26:49.211310 extend-filesystems[1553]: Resized filesystem in /dev/nvme0n1p9 Oct 2 19:26:49.208255 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:26:49.208650 systemd[1]: Finished extend-filesystems.service. Oct 2 19:26:49.215772 bash[1625]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:26:49.217503 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:26:49.221138 systemd[1]: Started systemd-logind.service. Oct 2 19:26:49.228145 tar[1565]: ./static Oct 2 19:26:49.229939 env[1571]: time="2023-10-02T19:26:49.229845259Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:26:49.341928 systemd[1]: nvidia.service: Deactivated successfully. Oct 2 19:26:49.363286 tar[1565]: ./vlan Oct 2 19:26:49.388881 dbus-daemon[1551]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 2 19:26:49.389162 systemd[1]: Started systemd-hostnamed.service. Oct 2 19:26:49.398966 dbus-daemon[1551]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1586 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 2 19:26:49.403961 systemd[1]: Starting polkit.service... Oct 2 19:26:49.469695 env[1571]: time="2023-10-02T19:26:49.469628384Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:26:49.470235 env[1571]: time="2023-10-02T19:26:49.470194923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Oct 2 19:26:49.482692 env[1571]: time="2023-10-02T19:26:49.482622031Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:26:49.483349 env[1571]: time="2023-10-02T19:26:49.483311935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:26:49.483924 env[1571]: time="2023-10-02T19:26:49.483873302Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:26:49.488012 env[1571]: time="2023-10-02T19:26:49.487949296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 19:26:49.488520 env[1571]: time="2023-10-02T19:26:49.488458355Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:26:49.488762 env[1571]: time="2023-10-02T19:26:49.488730525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:26:49.489415 env[1571]: time="2023-10-02T19:26:49.489379434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:26:49.490728 env[1571]: time="2023-10-02T19:26:49.490669071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:26:49.491380 env[1571]: time="2023-10-02T19:26:49.491282870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:26:49.491553 env[1571]: time="2023-10-02T19:26:49.491523345Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:26:49.491936 env[1571]: time="2023-10-02T19:26:49.491879445Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:26:49.492118 env[1571]: time="2023-10-02T19:26:49.492089835Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:26:49.500533 env[1571]: time="2023-10-02T19:26:49.500438209Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:26:49.500787 env[1571]: time="2023-10-02T19:26:49.500754322Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:26:49.500916 env[1571]: time="2023-10-02T19:26:49.500885094Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:26:49.501123 env[1571]: time="2023-10-02T19:26:49.501090447Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:26:49.501279 env[1571]: time="2023-10-02T19:26:49.501248639Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Oct 2 19:26:49.501418 env[1571]: time="2023-10-02T19:26:49.501387556Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:26:49.501541 env[1571]: time="2023-10-02T19:26:49.501511571Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:26:49.502482 env[1571]: time="2023-10-02T19:26:49.502436569Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:26:49.502968 env[1571]: time="2023-10-02T19:26:49.502933196Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:26:49.503262 env[1571]: time="2023-10-02T19:26:49.503228216Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:26:49.503397 env[1571]: time="2023-10-02T19:26:49.503366543Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:26:49.503520 env[1571]: time="2023-10-02T19:26:49.503490841Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:26:49.503866 env[1571]: time="2023-10-02T19:26:49.503835000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:26:49.504456 env[1571]: time="2023-10-02T19:26:49.504419991Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:26:49.505260 env[1571]: time="2023-10-02T19:26:49.505222436Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:26:49.505427 env[1571]: time="2023-10-02T19:26:49.505395873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:26:49.505552 env[1571]: time="2023-10-02T19:26:49.505523181Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:26:49.505819 env[1571]: time="2023-10-02T19:26:49.505787599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:26:49.505975 env[1571]: time="2023-10-02T19:26:49.505945521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:26:49.506111 env[1571]: time="2023-10-02T19:26:49.506067079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:26:49.506234 env[1571]: time="2023-10-02T19:26:49.506203834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:26:49.506354 env[1571]: time="2023-10-02T19:26:49.506325232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:26:49.506475 env[1571]: time="2023-10-02T19:26:49.506446312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:26:49.510645 env[1571]: time="2023-10-02T19:26:49.510530168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:26:49.510631 polkitd[1638]: Started polkitd version 121 Oct 2 19:26:49.511348 env[1571]: time="2023-10-02T19:26:49.511309260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Oct 2 19:26:49.511693 env[1571]: time="2023-10-02T19:26:49.511661674Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:26:49.512409 env[1571]: time="2023-10-02T19:26:49.512373690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:26:49.512564 env[1571]: time="2023-10-02T19:26:49.512522914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:26:49.512795 env[1571]: time="2023-10-02T19:26:49.512764483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:26:49.513102 env[1571]: time="2023-10-02T19:26:49.512891520Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:26:49.513240 env[1571]: time="2023-10-02T19:26:49.513204464Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:26:49.513482 env[1571]: time="2023-10-02T19:26:49.513452212Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:26:49.513657 env[1571]: time="2023-10-02T19:26:49.513622234Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:26:49.513955 env[1571]: time="2023-10-02T19:26:49.513924023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 19:26:49.514530 env[1571]: time="2023-10-02T19:26:49.514424409Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false 
EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:26:49.515896 env[1571]: time="2023-10-02T19:26:49.515169692Z" level=info msg="Connect containerd service" Oct 2 19:26:49.515896 env[1571]: time="2023-10-02T19:26:49.515241104Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:26:49.522621 env[1571]: time="2023-10-02T19:26:49.522560316Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:26:49.527409 env[1571]: time="2023-10-02T19:26:49.527316509Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:26:49.528174 env[1571]: time="2023-10-02T19:26:49.528134531Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:26:49.528499 systemd[1]: Started containerd.service. Oct 2 19:26:49.531171 env[1571]: time="2023-10-02T19:26:49.531127471Z" level=info msg="containerd successfully booted in 0.304626s" Oct 2 19:26:49.540551 env[1571]: time="2023-10-02T19:26:49.540456162Z" level=info msg="Start subscribing containerd event" Oct 2 19:26:49.540695 env[1571]: time="2023-10-02T19:26:49.540576577Z" level=info msg="Start recovering state" Oct 2 19:26:49.540785 env[1571]: time="2023-10-02T19:26:49.540716403Z" level=info msg="Start event monitor" Oct 2 19:26:49.540918 env[1571]: time="2023-10-02T19:26:49.540875627Z" level=info msg="Start snapshots syncer" Oct 2 19:26:49.540987 env[1571]: time="2023-10-02T19:26:49.540918354Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:26:49.541054 env[1571]: time="2023-10-02T19:26:49.541017246Z" level=info msg="Start streaming server" Oct 2 19:26:49.561674 polkitd[1638]: Loading rules from directory /etc/polkit-1/rules.d Oct 2 19:26:49.561801 polkitd[1638]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 2 19:26:49.587146 polkitd[1638]: Finished loading, compiling and executing 2 rules Oct 2 19:26:49.589187 dbus-daemon[1551]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 2 19:26:49.589508 systemd[1]: Started polkit.service. Oct 2 19:26:49.593042 polkitd[1638]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 2 19:26:49.634705 systemd-hostnamed[1586]: Hostname set to (transient) Oct 2 19:26:49.634804 systemd-resolved[1515]: System hostname changed to 'ip-172-31-26-69'. 
Oct 2 19:26:49.650925 coreos-metadata[1550]: Oct 02 19:26:49.650 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 2 19:26:49.653097 coreos-metadata[1550]: Oct 02 19:26:49.653 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Oct 2 19:26:49.654569 coreos-metadata[1550]: Oct 02 19:26:49.654 INFO Fetch successful Oct 2 19:26:49.654698 coreos-metadata[1550]: Oct 02 19:26:49.654 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 2 19:26:49.656103 coreos-metadata[1550]: Oct 02 19:26:49.656 INFO Fetch successful Oct 2 19:26:49.658697 tar[1565]: ./portmap Oct 2 19:26:49.662278 unknown[1550]: wrote ssh authorized keys file for user: core Oct 2 19:26:49.698460 update-ssh-keys[1664]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:26:49.699403 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Oct 2 19:26:49.778668 tar[1565]: ./host-local Oct 2 19:26:49.901846 tar[1565]: ./vrf Oct 2 19:26:49.965194 amazon-ssm-agent[1548]: 2023-10-02 19:26:49 INFO Entering SSM Agent hibernate - AccessDeniedException: User: arn:aws:sts::075585003325:assumed-role/jenkins-test/i-01e1930281a688ee2 is not authorized to perform: ssm:UpdateInstanceInformation on resource: arn:aws:ec2:us-west-2:075585003325:instance/i-01e1930281a688ee2 because no identity-based policy allows the ssm:UpdateInstanceInformation action Oct 2 19:26:49.965194 amazon-ssm-agent[1548]: status code: 400, request id: a153c0a5-f31b-4609-8a5c-b4dc2d24e63f Oct 2 19:26:49.965194 amazon-ssm-agent[1548]: 2023-10-02 19:26:49 INFO Agent is in hibernate mode. Reducing logging. Logging will be reduced to one log per backoff period Oct 2 19:26:50.014195 tar[1565]: ./bridge Oct 2 19:26:50.129184 tar[1565]: ./tuning Oct 2 19:26:50.234902 tar[1565]: ./firewall Oct 2 19:26:50.368675 tar[1565]: ./host-device Oct 2 19:26:50.511629 tar[1565]: ./sbr Oct 2 19:26:50.636802 tar[1565]: ./loopback Oct 2 19:26:50.744529 tar[1565]: ./dhcp Oct 2 19:26:50.884989 systemd[1]: Finished prepare-critools.service. Oct 2 19:26:50.953685 tar[1565]: ./ptp Oct 2 19:26:51.016830 tar[1565]: ./ipvlan Oct 2 19:26:51.077948 tar[1565]: ./bandwidth Oct 2 19:26:51.163920 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:26:51.274050 locksmithd[1616]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:26:53.634403 sshd_keygen[1585]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:26:53.694247 systemd[1]: Finished sshd-keygen.service. Oct 2 19:26:53.699090 systemd[1]: Starting issuegen.service... Oct 2 19:26:53.719488 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:26:53.719844 systemd[1]: Finished issuegen.service. Oct 2 19:26:53.724491 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:26:53.751988 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:26:53.757539 systemd[1]: Started getty@tty1.service. Oct 2 19:26:53.762241 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 19:26:53.764754 systemd[1]: Reached target getty.target. Oct 2 19:26:53.767602 systemd[1]: Reached target multi-user.target. Oct 2 19:26:53.772833 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:26:53.796474 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:26:53.796833 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:26:53.799344 systemd[1]: Startup finished in 1.186s (kernel) + 12.449s (initrd) + 14.748s (userspace) = 28.384s. 
Oct 2 19:26:57.478605 systemd[1]: Created slice system-sshd.slice. Oct 2 19:26:57.480965 systemd[1]: Started sshd@0-172.31.26.69:22-139.178.89.65:36646.service. Oct 2 19:26:57.671812 sshd[1758]: Accepted publickey for core from 139.178.89.65 port 36646 ssh2: RSA SHA256:UWiPcUSyDphe9v2WN1dtuuOFHMYWuZ3ahwMZ2IbYxYo Oct 2 19:26:57.677050 sshd[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:26:57.695051 systemd[1]: Created slice user-500.slice. Oct 2 19:26:57.698240 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:26:57.704805 systemd-logind[1560]: New session 1 of user core. Oct 2 19:26:57.724378 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:26:57.729345 systemd[1]: Starting user@500.service... Oct 2 19:26:57.739352 (systemd)[1761]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:26:57.941468 systemd[1761]: Queued start job for default target default.target. Oct 2 19:26:57.942949 systemd[1761]: Reached target paths.target. Oct 2 19:26:57.943218 systemd[1761]: Reached target sockets.target. Oct 2 19:26:57.943388 systemd[1761]: Reached target timers.target. Oct 2 19:26:57.943601 systemd[1761]: Reached target basic.target. Oct 2 19:26:57.943844 systemd[1761]: Reached target default.target. Oct 2 19:26:57.943935 systemd[1]: Started user@500.service. Oct 2 19:26:57.944153 systemd[1761]: Startup finished in 186ms. Oct 2 19:26:57.945888 systemd[1]: Started session-1.scope. Oct 2 19:26:58.098647 systemd[1]: Started sshd@1-172.31.26.69:22-139.178.89.65:36658.service. Oct 2 19:26:58.291487 sshd[1770]: Accepted publickey for core from 139.178.89.65 port 36658 ssh2: RSA SHA256:UWiPcUSyDphe9v2WN1dtuuOFHMYWuZ3ahwMZ2IbYxYo Oct 2 19:26:58.295266 sshd[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:26:58.304200 systemd-logind[1560]: New session 2 of user core. Oct 2 19:26:58.304392 systemd[1]: Started session-2.scope. Oct 2 19:26:58.453363 sshd[1770]: pam_unix(sshd:session): session closed for user core Oct 2 19:26:58.459133 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:26:58.460229 systemd[1]: sshd@1-172.31.26.69:22-139.178.89.65:36658.service: Deactivated successfully. Oct 2 19:26:58.461773 systemd-logind[1560]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:26:58.463445 systemd-logind[1560]: Removed session 2. Oct 2 19:26:58.483247 systemd[1]: Started sshd@2-172.31.26.69:22-139.178.89.65:36660.service. Oct 2 19:26:58.666847 sshd[1776]: Accepted publickey for core from 139.178.89.65 port 36660 ssh2: RSA SHA256:UWiPcUSyDphe9v2WN1dtuuOFHMYWuZ3ahwMZ2IbYxYo Oct 2 19:26:58.670977 sshd[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:26:58.679460 systemd-logind[1560]: New session 3 of user core. Oct 2 19:26:58.680375 systemd[1]: Started session-3.scope. Oct 2 19:26:58.812894 sshd[1776]: pam_unix(sshd:session): session closed for user core Oct 2 19:26:58.818826 systemd-logind[1560]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:26:58.819411 systemd[1]: sshd@2-172.31.26.69:22-139.178.89.65:36660.service: Deactivated successfully. Oct 2 19:26:58.820562 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 19:26:58.822112 systemd-logind[1560]: Removed session 3. Oct 2 19:26:58.845591 systemd[1]: Started sshd@3-172.31.26.69:22-139.178.89.65:36668.service. 
Oct 2 19:26:59.036788 sshd[1782]: Accepted publickey for core from 139.178.89.65 port 36668 ssh2: RSA SHA256:UWiPcUSyDphe9v2WN1dtuuOFHMYWuZ3ahwMZ2IbYxYo Oct 2 19:26:59.040558 sshd[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:26:59.049224 systemd[1]: Started session-4.scope. Oct 2 19:26:59.050112 systemd-logind[1560]: New session 4 of user core. Oct 2 19:26:59.198460 sshd[1782]: pam_unix(sshd:session): session closed for user core Oct 2 19:26:59.204110 systemd-logind[1560]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:26:59.204730 systemd[1]: sshd@3-172.31.26.69:22-139.178.89.65:36668.service: Deactivated successfully. Oct 2 19:26:59.205922 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:26:59.207514 systemd-logind[1560]: Removed session 4. Oct 2 19:26:59.229544 systemd[1]: Started sshd@4-172.31.26.69:22-139.178.89.65:36672.service. Oct 2 19:26:59.420592 sshd[1788]: Accepted publickey for core from 139.178.89.65 port 36672 ssh2: RSA SHA256:UWiPcUSyDphe9v2WN1dtuuOFHMYWuZ3ahwMZ2IbYxYo Oct 2 19:26:59.423843 sshd[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:26:59.432228 systemd-logind[1560]: New session 5 of user core. Oct 2 19:26:59.433204 systemd[1]: Started session-5.scope. Oct 2 19:26:59.604097 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:26:59.605288 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:26:59.621327 dbus-daemon[1551]: avc: received setenforce notice (enforcing=1) Oct 2 19:26:59.624053 sudo[1791]: pam_unix(sudo:session): session closed for user root Oct 2 19:26:59.649243 sshd[1788]: pam_unix(sshd:session): session closed for user core Oct 2 19:26:59.656217 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:26:59.657665 systemd-logind[1560]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:26:59.658019 systemd[1]: sshd@4-172.31.26.69:22-139.178.89.65:36672.service: Deactivated successfully. Oct 2 19:26:59.660463 systemd-logind[1560]: Removed session 5. Oct 2 19:26:59.681308 systemd[1]: Started sshd@5-172.31.26.69:22-139.178.89.65:36684.service. Oct 2 19:26:59.866963 sshd[1795]: Accepted publickey for core from 139.178.89.65 port 36684 ssh2: RSA SHA256:UWiPcUSyDphe9v2WN1dtuuOFHMYWuZ3ahwMZ2IbYxYo Oct 2 19:26:59.870592 sshd[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:26:59.880010 systemd[1]: Started session-6.scope. Oct 2 19:26:59.881041 systemd-logind[1560]: New session 6 of user core. Oct 2 19:27:00.002541 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:27:00.003195 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:27:00.010580 sudo[1799]: pam_unix(sudo:session): session closed for user root Oct 2 19:27:00.024175 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:27:00.025180 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:27:00.049455 systemd[1]: Stopping audit-rules.service... 
Oct 2 19:27:00.053000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:27:00.055004 auditctl[1802]: No rules Oct 2 19:27:00.056460 kernel: kauditd_printk_skb: 65 callbacks suppressed Oct 2 19:27:00.056543 kernel: audit: type=1305 audit(1696274820.053:154): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:27:00.060670 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:27:00.061238 systemd[1]: Stopped audit-rules.service. Oct 2 19:27:00.053000 audit[1802]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdbc54400 a2=420 a3=0 items=0 ppid=1 pid=1802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:00.064784 systemd[1]: Starting audit-rules.service... Oct 2 19:27:00.073732 kernel: audit: type=1300 audit(1696274820.053:154): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdbc54400 a2=420 a3=0 items=0 ppid=1 pid=1802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:00.053000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:27:00.077905 kernel: audit: type=1327 audit(1696274820.053:154): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:27:00.077994 kernel: audit: type=1131 audit(1696274820.060:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.130728 augenrules[1819]: No rules Oct 2 19:27:00.132249 systemd[1]: Finished audit-rules.service. Oct 2 19:27:00.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.141582 sudo[1798]: pam_unix(sudo:session): session closed for user root Oct 2 19:27:00.141000 audit[1798]: USER_END pid=1798 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.152525 kernel: audit: type=1130 audit(1696274820.132:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.152599 kernel: audit: type=1106 audit(1696274820.141:157): pid=1798 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.141000 audit[1798]: CRED_DISP pid=1798 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:27:00.161344 kernel: audit: type=1104 audit(1696274820.141:158): pid=1798 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.165401 sshd[1795]: pam_unix(sshd:session): session closed for user core Oct 2 19:27:00.166000 audit[1795]: USER_END pid=1795 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:00.166000 audit[1795]: CRED_DISP pid=1795 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:00.182391 systemd[1]: sshd@5-172.31.26.69:22-139.178.89.65:36684.service: Deactivated successfully. Oct 2 19:27:00.183522 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:27:00.190348 kernel: audit: type=1106 audit(1696274820.166:159): pid=1795 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:00.190468 kernel: audit: type=1104 audit(1696274820.166:160): pid=1795 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:00.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.26.69:22-139.178.89.65:36684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.191676 systemd-logind[1560]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:27:00.205456 kernel: audit: type=1131 audit(1696274820.181:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.26.69:22-139.178.89.65:36684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.201192 systemd[1]: Started sshd@6-172.31.26.69:22-139.178.89.65:36688.service. Oct 2 19:27:00.205155 systemd-logind[1560]: Removed session 6. Oct 2 19:27:00.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.26.69:22-139.178.89.65:36688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:27:00.388000 audit[1825]: USER_ACCT pid=1825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:00.389722 sshd[1825]: Accepted publickey for core from 139.178.89.65 port 36688 ssh2: RSA SHA256:UWiPcUSyDphe9v2WN1dtuuOFHMYWuZ3ahwMZ2IbYxYo Oct 2 19:27:00.391000 audit[1825]: CRED_ACQ pid=1825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:00.392000 audit[1825]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd911d570 a2=3 a3=1 items=0 ppid=1 pid=1825 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:00.392000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:27:00.394174 sshd[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:27:00.402915 systemd[1]: Started session-7.scope. Oct 2 19:27:00.403856 systemd-logind[1560]: New session 7 of user core. Oct 2 19:27:00.411000 audit[1825]: USER_START pid=1825 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:00.418000 audit[1827]: CRED_ACQ pid=1827 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:00.523000 audit[1828]: USER_ACCT pid=1828 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.524579 sudo[1828]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:27:00.523000 audit[1828]: CRED_REFR pid=1828 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.525119 sudo[1828]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:27:00.527000 audit[1828]: USER_START pid=1828 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:01.199155 systemd[1]: Reloading. 
Oct 2 19:27:01.372541 /usr/lib/systemd/system-generators/torcx-generator[1857]: time="2023-10-02T19:27:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:27:01.380389 /usr/lib/systemd/system-generators/torcx-generator[1857]: time="2023-10-02T19:27:01Z" level=info msg="torcx already run" Oct 2 19:27:01.609417 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:27:01.609458 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:27:01.647872 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:27:01.805000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.805000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.805000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.805000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.806000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.806000 audit: BPF prog-id=34 op=LOAD Oct 2 19:27:01.806000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:27:01.808000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.808000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.809000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.809000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.809000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.809000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.809000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.809000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.809000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.809000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.809000 audit: BPF prog-id=35 op=LOAD Oct 2 19:27:01.809000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:27:01.816000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.816000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.816000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.816000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.816000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.816000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.816000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.816000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:27:01.816000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.816000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.816000 audit: BPF prog-id=36 op=LOAD Oct 2 19:27:01.816000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:27:01.816000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.816000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.816000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.816000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.816000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.817000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.817000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.817000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.817000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.817000 audit: BPF prog-id=37 op=LOAD Oct 2 19:27:01.817000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.817000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.817000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.817000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.817000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.817000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.817000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.817000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.817000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.817000 audit: BPF prog-id=38 op=LOAD Oct 2 19:27:01.817000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:27:01.817000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:27:01.819000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.819000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.819000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.819000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.819000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.819000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.819000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.819000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.819000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.820000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.820000 audit: BPF prog-id=39 op=LOAD Oct 2 19:27:01.820000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:27:01.820000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.820000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.820000 audit[1]: AVC avc: denied { 
perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.820000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.820000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.820000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.820000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.820000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.820000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.820000 audit: BPF prog-id=40 op=LOAD Oct 2 19:27:01.820000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.820000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.820000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.820000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.820000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.820000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.820000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.820000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.820000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.820000 audit: BPF prog-id=41 op=LOAD Oct 2 19:27:01.820000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:27:01.820000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:27:01.821000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.821000 audit: BPF prog-id=42 op=LOAD Oct 2 19:27:01.821000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:27:01.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.822000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.822000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.822000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.822000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.822000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.822000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:27:01.822000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.822000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.822000 audit: BPF prog-id=43 op=LOAD Oct 2 19:27:01.822000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.822000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.822000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.822000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.822000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.822000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.822000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.822000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.822000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.822000 audit: BPF prog-id=44 op=LOAD Oct 2 19:27:01.822000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:27:01.822000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:27:01.823000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.823000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.823000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.823000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.823000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.823000 
audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.823000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.823000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.823000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.824000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.824000 audit: BPF prog-id=45 op=LOAD Oct 2 19:27:01.824000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:27:01.825000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.826000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.826000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.826000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.826000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.826000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.826000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.826000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.826000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.826000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.826000 audit: BPF prog-id=46 op=LOAD Oct 2 19:27:01.826000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:27:01.826000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.826000 audit[1]: AVC avc: denied { 
bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.826000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.826000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.826000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.826000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.827000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.827000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.827000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.827000 audit: BPF prog-id=47 op=LOAD Oct 2 19:27:01.827000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.827000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.827000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.827000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.827000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.827000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.827000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.827000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.827000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.827000 
audit: BPF prog-id=48 op=LOAD Oct 2 19:27:01.827000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:27:01.827000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:27:01.829000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.829000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.829000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.829000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.829000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.829000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.829000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.829000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.829000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.830000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.830000 audit: BPF prog-id=49 op=LOAD Oct 2 19:27:01.830000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:27:01.830000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.830000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.830000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.830000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.830000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.830000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:27:01.830000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.830000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.831000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.831000 audit: BPF prog-id=50 op=LOAD Oct 2 19:27:01.831000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.831000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.831000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.831000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.831000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.831000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.831000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.831000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.831000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.831000 audit: BPF prog-id=51 op=LOAD Oct 2 19:27:01.831000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:27:01.831000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:27:01.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:01.850575 systemd[1]: Started kubelet.service. Oct 2 19:27:01.885795 systemd[1]: Starting coreos-metadata.service... Oct 2 19:27:02.038400 kubelet[1912]: E1002 19:27:02.038310 1912 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Oct 2 19:27:02.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Oct 2 19:27:02.044524 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 19:27:02.044839 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 19:27:02.082228 coreos-metadata[1915]: Oct 02 19:27:02.082 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 2 19:27:02.083293 coreos-metadata[1915]: Oct 02 19:27:02.083 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1 Oct 2 19:27:02.083933 coreos-metadata[1915]: Oct 02 19:27:02.083 INFO Fetch successful Oct 2 19:27:02.083933 coreos-metadata[1915]: Oct 02 19:27:02.083 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1 Oct 2 19:27:02.084671 coreos-metadata[1915]: Oct 02 19:27:02.084 INFO Fetch successful Oct 2 19:27:02.084671 coreos-metadata[1915]: Oct 02 19:27:02.084 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1 Oct 2 19:27:02.085396 coreos-metadata[1915]: Oct 02 19:27:02.085 INFO Fetch successful Oct 2 19:27:02.085396 coreos-metadata[1915]: Oct 02 19:27:02.085 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1 Oct 2 19:27:02.085939 coreos-metadata[1915]: Oct 02 19:27:02.085 INFO Fetch successful Oct 2 19:27:02.085939 coreos-metadata[1915]: Oct 02 19:27:02.085 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1 Oct 2 19:27:02.089253 coreos-metadata[1915]: Oct 02 19:27:02.089 INFO Fetch successful Oct 2 19:27:02.089253 coreos-metadata[1915]: Oct 02 19:27:02.089 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1 Oct 2 19:27:02.090009 coreos-metadata[1915]: Oct 02 19:27:02.089 INFO Fetch successful Oct 2 19:27:02.090009 coreos-metadata[1915]: Oct 02 19:27:02.089 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1 Oct 2 19:27:02.090699 coreos-metadata[1915]: Oct 02 19:27:02.090 INFO Fetch successful Oct 2 19:27:02.090699 coreos-metadata[1915]: Oct 02 19:27:02.090 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1 Oct 2 19:27:02.091357 coreos-metadata[1915]: Oct 02 19:27:02.091 INFO Fetch successful Oct 2 19:27:02.112876 systemd[1]: Finished coreos-metadata.service. Oct 2 19:27:02.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:02.634222 systemd[1]: Stopped kubelet.service. Oct 2 19:27:02.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:02.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:02.679445 systemd[1]: Reloading. 
Oct 2 19:27:02.863783 /usr/lib/systemd/system-generators/torcx-generator[1976]: time="2023-10-02T19:27:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:27:02.863848 /usr/lib/systemd/system-generators/torcx-generator[1976]: time="2023-10-02T19:27:02Z" level=info msg="torcx already run" Oct 2 19:27:03.089874 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:27:03.089918 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:27:03.128788 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:27:03.284000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.284000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.284000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.284000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.284000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.284000 audit: BPF prog-id=52 op=LOAD Oct 2 19:27:03.284000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:27:03.287000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.287000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.287000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.287000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.288000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.288000 audit: BPF prog-id=53 op=LOAD Oct 2 19:27:03.288000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:27:03.294000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:27:03.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.295000 audit: BPF prog-id=54 op=LOAD Oct 2 19:27:03.295000 audit: BPF prog-id=36 op=UNLOAD Oct 2 19:27:03.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.295000 audit: BPF prog-id=55 op=LOAD Oct 2 19:27:03.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.296000 audit: BPF prog-id=56 op=LOAD Oct 2 19:27:03.296000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:27:03.296000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:27:03.298000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.298000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.298000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.298000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.298000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.298000 audit: BPF prog-id=57 op=LOAD Oct 2 19:27:03.298000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:27:03.299000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.299000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.299000 audit[1]: AVC avc: denied { 
perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.299000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.299000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.299000 audit: BPF prog-id=58 op=LOAD Oct 2 19:27:03.299000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.299000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.299000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.299000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.299000 audit: BPF prog-id=59 op=LOAD Oct 2 19:27:03.299000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:27:03.299000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit: BPF prog-id=60 op=LOAD Oct 2 19:27:03.300000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit: BPF prog-id=61 op=LOAD Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.301000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.301000 audit: BPF prog-id=62 op=LOAD Oct 2 19:27:03.301000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:27:03.301000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:27:03.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.302000 
audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.302000 audit: BPF prog-id=63 op=LOAD Oct 2 19:27:03.302000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:27:03.304000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.304000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.304000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.304000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.304000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.304000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.304000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.304000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.304000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.305000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.305000 audit: BPF prog-id=64 op=LOAD Oct 2 19:27:03.305000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:27:03.305000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.305000 audit[1]: AVC avc: denied { 
bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.305000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.305000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.305000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.305000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.305000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.305000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.305000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.305000 audit: BPF prog-id=65 op=LOAD Oct 2 19:27:03.305000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.305000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.306000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.306000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.306000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.306000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.306000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.306000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.306000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.306000 
audit: BPF prog-id=66 op=LOAD Oct 2 19:27:03.306000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:27:03.306000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:27:03.308000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.308000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.308000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.308000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.308000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.308000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.308000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.308000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.308000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.308000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.309000 audit: BPF prog-id=67 op=LOAD Oct 2 19:27:03.309000 audit: BPF prog-id=49 op=UNLOAD Oct 2 19:27:03.309000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.309000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.309000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.309000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.309000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.309000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:27:03.309000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.309000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.309000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.309000 audit: BPF prog-id=68 op=LOAD Oct 2 19:27:03.309000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.309000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.309000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.309000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.309000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.309000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.309000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.309000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.310000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.310000 audit: BPF prog-id=69 op=LOAD Oct 2 19:27:03.310000 audit: BPF prog-id=50 op=UNLOAD Oct 2 19:27:03.310000 audit: BPF prog-id=51 op=UNLOAD Oct 2 19:27:03.344849 systemd[1]: Started kubelet.service. Oct 2 19:27:03.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:03.498480 kubelet[2032]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:27:03.499030 kubelet[2032]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
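The long run of audit records above is systemd (pid 1) detaching and re-attaching its per-unit cgroup BPF programs: each "BPF prog-id=NN op=UNLOAD"/"op=LOAD" pair retires one program and installs its replacement, and every load is preceded by capability2 AVC checks for capability 38 (CAP_PERFMON) and 39 (CAP_BPF) against the kernel_t domain, which the policy refuses (permissive=0) even though the loads themselves still complete. A minimal triage sketch in Python, assuming the excerpt has been saved to a hypothetical file journal.txt, that counts the denials per capability and the BPF program events:

import re
from collections import Counter

# Capability numbers that actually appear in these capability2 AVC records.
CAP_NAMES = {33: "CAP_MAC_ADMIN", 38: "CAP_PERFMON", 39: "CAP_BPF"}

denials = Counter()   # capability name -> number of AVC denials
bpf_ops = Counter()   # "LOAD"/"UNLOAD" -> number of BPF prog events

with open("journal.txt", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        # A wrapped journal line may hold several records, so use findall.
        for cap in re.findall(r"avc:\s+denied\b.*?capability=(\d+)", line):
            num = int(cap)
            denials[CAP_NAMES.get(num, f"capability {num}")] += 1
        for op in re.findall(r"BPF prog-id=\d+ op=(LOAD|UNLOAD)", line):
            bpf_ops[op] += 1

print("AVC denials:", dict(denials))
print("BPF prog events:", dict(bpf_ops))

That the prog-id LOAD lines still appear after denied CAP_BPF/CAP_PERFMON checks suggests the kernel satisfied the operation through another capability check; that is an inference from the log, not something the records state directly.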
Oct 2 19:27:03.499500 kubelet[2032]: I1002 19:27:03.499450 2032 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:27:03.501839 kubelet[2032]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:27:03.501985 kubelet[2032]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:27:04.310518 kubelet[2032]: I1002 19:27:04.310474 2032 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Oct 2 19:27:04.310741 kubelet[2032]: I1002 19:27:04.310715 2032 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:27:04.311499 kubelet[2032]: I1002 19:27:04.311461 2032 server.go:836] "Client rotation is on, will bootstrap in background" Oct 2 19:27:04.320290 kubelet[2032]: I1002 19:27:04.320209 2032 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:27:04.323688 kubelet[2032]: W1002 19:27:04.323639 2032 machine.go:65] Cannot read vendor id correctly, set empty. Oct 2 19:27:04.325101 kubelet[2032]: I1002 19:27:04.325039 2032 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 2 19:27:04.325801 kubelet[2032]: I1002 19:27:04.325761 2032 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:27:04.325909 kubelet[2032]: I1002 19:27:04.325887 2032 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Oct 2 19:27:04.326106 kubelet[2032]: I1002 19:27:04.325976 2032 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:27:04.326106 kubelet[2032]: I1002 19:27:04.326002 2032 container_manager_linux.go:308] "Creating device plugin manager" Oct 2 19:27:04.326323 kubelet[2032]: I1002 19:27:04.326296 2032 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:27:04.332994 kubelet[2032]: 
I1002 19:27:04.332937 2032 kubelet.go:398] "Attempting to sync node with API server" Oct 2 19:27:04.332994 kubelet[2032]: I1002 19:27:04.332988 2032 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:27:04.333236 kubelet[2032]: I1002 19:27:04.333132 2032 kubelet.go:297] "Adding apiserver pod source" Oct 2 19:27:04.333236 kubelet[2032]: I1002 19:27:04.333160 2032 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:27:04.335426 kubelet[2032]: I1002 19:27:04.335376 2032 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:27:04.336623 kubelet[2032]: W1002 19:27:04.336582 2032 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 2 19:27:04.337476 kubelet[2032]: E1002 19:27:04.337444 2032 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:04.337720 kubelet[2032]: I1002 19:27:04.337461 2032 server.go:1186] "Started kubelet" Oct 2 19:27:04.338477 kubelet[2032]: E1002 19:27:04.338433 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:04.340000 audit[2032]: AVC avc: denied { mac_admin } for pid=2032 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:04.340000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:27:04.340000 audit[2032]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000bbc3f0 a1=4000883068 a2=4000bbc3c0 a3=25 items=0 ppid=1 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.340000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:27:04.341000 audit[2032]: AVC avc: denied { mac_admin } for pid=2032 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:04.341000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:27:04.341000 audit[2032]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40004fe540 a1=4000883080 a2=4000bbc480 a3=25 items=0 ppid=1 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.341000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:27:04.342877 kubelet[2032]: I1002 19:27:04.342049 2032 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 
19:27:04.342877 kubelet[2032]: I1002 19:27:04.342156 2032 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:27:04.342877 kubelet[2032]: I1002 19:27:04.342278 2032 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:27:04.343135 kubelet[2032]: E1002 19:27:04.343101 2032 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:27:04.343213 kubelet[2032]: E1002 19:27:04.343151 2032 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:27:04.348790 kubelet[2032]: I1002 19:27:04.348751 2032 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:27:04.350238 kubelet[2032]: I1002 19:27:04.350205 2032 server.go:451] "Adding debug handlers to kubelet server" Oct 2 19:27:04.361459 kubelet[2032]: I1002 19:27:04.361425 2032 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 19:27:04.363150 kubelet[2032]: W1002 19:27:04.361781 2032 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:27:04.363537 kubelet[2032]: E1002 19:27:04.363513 2032 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:27:04.363660 kubelet[2032]: E1002 19:27:04.361859 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f72a120d0a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 337427722, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 337427722, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
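The container-manager line a few records back spells out the kubelet's hard-eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%), and the "Eviction manager: failed to get summary stats" error further down is that machinery starting before node stats exist. A toy Python illustration of how those absolute and percentage signals compare against observed values; the node capacity figures are invented for the example:

# Hard-eviction thresholds taken from the container-manager config above.
THRESHOLDS = {
    "memory.available":  ("lt", 100 * 1024 * 1024),  # 100Mi, absolute
    "nodefs.available":  ("lt_pct", 0.10),           # 10% of nodefs capacity
    "nodefs.inodesFree": ("lt_pct", 0.05),           # 5% of nodefs inodes
    "imagefs.available": ("lt_pct", 0.15),           # 15% of imagefs capacity
}

def breached(signal, observed, capacity=None):
    """Return (threshold crossed?, effective threshold) for one eviction signal."""
    kind, limit = THRESHOLDS[signal]
    threshold = limit if kind == "lt" else limit * capacity
    return observed < threshold, threshold

# Hypothetical observations, only to show how the comparison works.
print(breached("memory.available", 80 * 1024**2))               # (True, 104857600)
print(breached("nodefs.available", 6 * 1024**3, 40 * 1024**3))  # (False, 4294967296.0)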
Oct 2 19:27:04.364370 kubelet[2032]: W1002 19:27:04.362266 2032 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.26.69" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:27:04.369207 kubelet[2032]: E1002 19:27:04.369174 2032 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.26.69" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:27:04.370752 kubelet[2032]: I1002 19:27:04.364664 2032 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 2 19:27:04.370971 kubelet[2032]: E1002 19:27:04.366798 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f72a690cc0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 343129280, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 343129280, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:27:04.372312 kubelet[2032]: W1002 19:27:04.369968 2032 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:27:04.372514 kubelet[2032]: E1002 19:27:04.370151 2032 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "172.31.26.69" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:27:04.372656 kubelet[2032]: E1002 19:27:04.372633 2032 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:27:04.406000 audit[2045]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=2045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.406000 audit[2045]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd7d6b020 a2=0 a3=1 items=0 ppid=2032 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.406000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:27:04.415039 kubelet[2032]: E1002 19:27:04.414902 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f72e9a6edc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.26.69 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413474524, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413474524, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
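The burst of "forbidden" errors above shares one cause: the kubelet begins listing services, nodes and CSI drivers and posting events and leases before its TLS bootstrap has produced an accepted client certificate (the earlier "Client rotation is on, will bootstrap in background" line is the other half of this), so every request is still attributed to system:anonymous and rejected by RBAC. A small log-triage sketch, again in Python and again assuming a hypothetical journal.txt, that tallies which verb/resource pairs the anonymous user is being denied:

import re
from collections import Counter

# Matches the RBAC denial text embedded in the kubelet errors, e.g.
#   User "system:anonymous" cannot list resource "services" in API group ""
PATTERN = re.compile(
    r'User "system:anonymous" cannot (\w+) resource "([^"]+)"'
    r'(?: in API group "([^"]*)")?'
)

denied = Counter()
with open("journal.txt", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        for verb, resource, group in PATTERN.findall(line):
            denied[(verb, resource, group or "core")] += 1

for (verb, resource, group), count in denied.most_common():
    print(f"{count:3d}  {verb:<6} {resource} ({group})")

Once the bootstrap certificate is signed and the node registers, these listings normally succeed and the retry loops ("will retry in 200ms" above, then 400ms) stop; the log itself does not show that point yet.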
Oct 2 19:27:04.415457 kubelet[2032]: I1002 19:27:04.415418 2032 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 2 19:27:04.415457 kubelet[2032]: I1002 19:27:04.415453 2032 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 2 19:27:04.415588 kubelet[2032]: I1002 19:27:04.415484 2032 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:27:04.418445 kubelet[2032]: I1002 19:27:04.418397 2032 policy_none.go:49] "None policy: Start" Oct 2 19:27:04.417000 audit[2049]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.419605 kubelet[2032]: E1002 19:27:04.419243 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f72e9aa025", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.26.69 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413487141, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413487141, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:27:04.419871 kubelet[2032]: I1002 19:27:04.419673 2032 memory_manager.go:169] "Starting memorymanager" policy="None" Oct 2 19:27:04.419871 kubelet[2032]: I1002 19:27:04.419712 2032 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:27:04.417000 audit[2049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffe494d2d0 a2=0 a3=1 items=0 ppid=2032 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.417000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:27:04.426539 kubelet[2032]: E1002 19:27:04.426377 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f72e9ab3fd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.26.69 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413492221, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413492221, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:04.430641 systemd[1]: Created slice kubepods.slice. Oct 2 19:27:04.444168 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:27:04.459247 systemd[1]: Created slice kubepods-besteffort.slice. 
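The three "Created slice" lines are the kubelet's QoS hierarchy being realized through the systemd cgroup driver selected in the container-manager config earlier (CgroupDriver:systemd, CgroupRoot:/): Burstable pods land under kubepods-burstable.slice and BestEffort pods under kubepods-besteffort.slice, both nested inside kubepods.slice. systemd encodes that nesting in the unit name itself, where each dash means "child of". A small sketch of that naming rule; the cgroup v2 unified mount point is an assumption:

def slice_to_cgroup_path(unit: str, root: str = "/sys/fs/cgroup") -> str:
    """Expand a systemd slice unit name into its cgroupfs directory.

    systemd nests slices by name: "kubepods-burstable.slice" lives inside
    "kubepods.slice". The mount point is assumed to be the v2 unified layout.
    """
    name = unit.removesuffix(".slice")
    parts = name.split("-")
    # Build the chain: kubepods.slice / kubepods-burstable.slice / ...
    chain = ["-".join(parts[: i + 1]) + ".slice" for i in range(len(parts))]
    return "/".join([root] + chain)

for unit in ("kubepods.slice", "kubepods-burstable.slice", "kubepods-besteffort.slice"):
    print(unit, "->", slice_to_cgroup_path(unit))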
Oct 2 19:27:04.466726 kubelet[2032]: I1002 19:27:04.466690 2032 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:27:04.468000 audit[2032]: AVC avc: denied { mac_admin } for pid=2032 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:04.471387 kubelet[2032]: E1002 19:27:04.470812 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f72e9a6edc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.26.69 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413474524, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 468139253, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.26.69.178a60f72e9a6edc" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:27:04.471623 kubelet[2032]: I1002 19:27:04.468360 2032 kubelet_node_status.go:70] "Attempting to register node" node="172.31.26.69" Oct 2 19:27:04.468000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:27:04.468000 audit[2032]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40010b4d80 a1=4000f87c98 a2=40010b4d50 a3=25 items=0 ppid=1 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.473745 kubelet[2032]: E1002 19:27:04.472891 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f72e9aa025", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.26.69 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413487141, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 468148125, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.26.69.178a60f72e9aa025" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:04.473915 kubelet[2032]: E1002 19:27:04.473898 2032 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.26.69" Oct 2 19:27:04.468000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:27:04.474340 kubelet[2032]: I1002 19:27:04.474315 2032 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:27:04.475069 kubelet[2032]: I1002 19:27:04.475026 2032 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:27:04.476193 kubelet[2032]: E1002 19:27:04.474619 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f72e9ab3fd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.26.69 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413492221, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 468154807, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.26.69.178a60f72e9ab3fd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:04.476792 kubelet[2032]: E1002 19:27:04.476720 2032 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.26.69\" not found" Oct 2 19:27:04.477654 kubelet[2032]: E1002 19:27:04.477473 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f732001047", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 470466631, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 470466631, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
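The mac_admin AVC denials, SELINUX_ERR records and kubelet warnings around this point appear to be one event seen from three angles: kubelet calls setxattr() to relabel its plugin and device-plugin directories as system_u:object_r:container_file_t:s0, the loaded policy does not accept that context and refuses CAP_MAC_ADMIN (capability 33), and the syscall fails with EINVAL, which kubelet reports as "invalid argument" and then continues. A small decoder for the raw SYSCALL fields, assuming the asm-generic syscall table used by arm64 and mapping only the numbers that occur in these records:

import errno

# arm64 (asm-generic) syscall numbers seen in the SYSCALL records above.
AARCH64_SYSCALLS = {5: "setxattr", 211: "sendmsg"}

def decode_syscall_record(arch: str, nr: int, exit_code: int) -> str:
    """Turn the arch/syscall/exit fields of an audit SYSCALL record into prose."""
    arch_name = "aarch64 (64-bit, little-endian)" if arch.lower() == "c00000b7" else arch
    name = AARCH64_SYSCALLS.get(nr, f"syscall {nr}")
    if exit_code < 0:
        result = f"failed with {errno.errorcode.get(-exit_code, exit_code)}"
    else:
        result = f"returned {exit_code}"
    return f"{name} on {arch_name} {result}"

# The setxattr failure behind kubelet's "invalid argument" warnings, and one of
# the iptables sendmsg calls that programmed nftables a little further down.
print(decode_syscall_record("c00000b7", 5, -22))    # setxattr ... EINVAL
print(decode_syscall_record("c00000b7", 211, 136))  # sendmsg ... returned 136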
Oct 2 19:27:04.446000 audit[2051]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=2051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.446000 audit[2051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd720e070 a2=0 a3=1 items=0 ppid=2032 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.446000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:27:04.493000 audit[2057]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=2057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.493000 audit[2057]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd0426120 a2=0 a3=1 items=0 ppid=2032 pid=2057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.493000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:27:04.554000 audit[2062]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.554000 audit[2062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffcddaf4b0 a2=0 a3=1 items=0 ppid=2032 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.554000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:27:04.558000 audit[2063]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=2063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.558000 audit[2063]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd4771a00 a2=0 a3=1 items=0 ppid=2032 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.558000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:27:04.572000 audit[2066]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=2066 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.572000 audit[2066]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffe2937d60 a2=0 a3=1 items=0 ppid=2032 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.572000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:27:04.577692 kubelet[2032]: 
E1002 19:27:04.575257 2032 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "172.31.26.69" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:27:04.589000 audit[2069]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2069 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.589000 audit[2069]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffc28dd010 a2=0 a3=1 items=0 ppid=2032 pid=2069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.589000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:27:04.593000 audit[2070]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=2070 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.593000 audit[2070]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff39f1670 a2=0 a3=1 items=0 ppid=2032 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.593000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:27:04.597000 audit[2071]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=2071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.597000 audit[2071]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcc134f90 a2=0 a3=1 items=0 ppid=2032 pid=2071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.597000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:27:04.604000 audit[2073]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=2073 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.604000 audit[2073]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffccc07dd0 a2=0 a3=1 items=0 ppid=2032 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.604000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:27:04.612000 audit[2075]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.612000 audit[2075]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc45a4140 a2=0 a3=1 items=0 ppid=2032 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.612000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:27:04.648000 audit[2078]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2078 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.648000 audit[2078]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffd4fb1d90 a2=0 a3=1 items=0 ppid=2032 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.648000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:27:04.656000 audit[2080]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=2080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.656000 audit[2080]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffe47f6730 a2=0 a3=1 items=0 ppid=2032 pid=2080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.656000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:27:04.675780 kubelet[2032]: I1002 19:27:04.675688 2032 kubelet_node_status.go:70] "Attempting to register node" node="172.31.26.69" Oct 2 19:27:04.677380 kubelet[2032]: E1002 19:27:04.677330 2032 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.26.69" Oct 2 19:27:04.677541 kubelet[2032]: E1002 19:27:04.677412 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f72e9a6edc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.26.69 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413474524, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 675629090, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", 
ReportingInstance:""}': 'events "172.31.26.69.178a60f72e9a6edc" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:04.676000 audit[2083]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=2083 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.676000 audit[2083]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=ffffc94f4ba0 a2=0 a3=1 items=0 ppid=2032 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.676000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:27:04.678742 kubelet[2032]: E1002 19:27:04.678620 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f72e9aa025", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.26.69 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413487141, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 675639527, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.26.69.178a60f72e9aa025" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:04.678964 kubelet[2032]: I1002 19:27:04.678721 2032 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Oct 2 19:27:04.681000 audit[2084]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=2084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:04.681000 audit[2084]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffff55eb10 a2=0 a3=1 items=0 ppid=2032 pid=2084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.681000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:27:04.682000 audit[2085]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=2085 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.682000 audit[2085]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd63c1ee0 a2=0 a3=1 items=0 ppid=2032 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.682000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:27:04.686000 audit[2087]: NETFILTER_CFG table=nat:19 family=2 entries=1 op=nft_register_chain pid=2087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.686000 audit[2087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe913b840 a2=0 a3=1 items=0 ppid=2032 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.686000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:27:04.687000 audit[2086]: NETFILTER_CFG table=nat:20 family=10 entries=2 op=nft_register_chain pid=2086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:04.687000 audit[2086]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc2b26f20 a2=0 a3=1 items=0 ppid=2032 pid=2086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.687000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:27:04.690000 audit[2088]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=2088 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.690000 audit[2088]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd79a1f80 a2=0 a3=1 items=0 ppid=2032 pid=2088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.690000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:27:04.696000 audit[2090]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=2090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:04.696000 audit[2090]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffc2c75130 a2=0 a3=1 items=0 ppid=2032 pid=2090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.696000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:27:04.700000 audit[2091]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=2091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:04.700000 audit[2091]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffcbac2200 a2=0 a3=1 items=0 ppid=2032 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.700000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:27:04.709000 audit[2093]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=2093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:04.709000 audit[2093]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffdf84ff00 a2=0 a3=1 items=0 ppid=2032 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.709000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:27:04.713000 audit[2094]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=2094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:04.713000 audit[2094]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff64b7d60 a2=0 a3=1 items=0 ppid=2032 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.713000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:27:04.717000 audit[2095]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=2095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:04.717000 audit[2095]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffead62310 a2=0 a3=1 items=0 ppid=2032 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.717000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:27:04.725000 audit[2097]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=2097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:04.725000 audit[2097]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 
a0=3 a1=ffffda1251a0 a2=0 a3=1 items=0 ppid=2032 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.725000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:27:04.733000 audit[2099]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=2099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:04.733000 audit[2099]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffffd363920 a2=0 a3=1 items=0 ppid=2032 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.733000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:27:04.740432 kubelet[2032]: E1002 19:27:04.740316 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f72e9ab3fd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.26.69 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413492221, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 675645487, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.26.69.178a60f72e9ab3fd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
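The long PROCTITLE values in the audit records above are the hex-encoded command lines of the xtables-nft-multi invocations the kubelet issues while creating its KUBE-* chains, with a NUL byte separating the arguments (the paired SYSCALL records, syscall=211 on this aarch64 host, appear to be the netlink sendmsg calls that actually install the rules). A minimal decoding sketch, purely illustrative and not part of any tool shown in this log; the sample value is copied from the ip6tables KUBE-POSTROUTING entry above:

import shlex

def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE field: hex-encoded argv with NUL separators."""
    raw = bytes.fromhex(hex_value)
    args = [a.decode("utf-8", errors="replace") for a in raw.split(b"\x00") if a]
    return shlex.join(args)

if __name__ == "__main__":
    # PROCTITLE copied from the ip6tables KUBE-POSTROUTING audit entry above.
    sample = (
        "6970367461626C6573002D770035002D5700313030303030002D4900"
        "504F5354524F5554494E47002D74006E6174002D6D00"
        "636F6D6D656E74002D2D636F6D6D656E7400"
        "6B756265726E6574657320706F7374726F7574696E672072756C657300"
        "2D6A004B5542452D504F5354524F5554494E47"
    )
    print(decode_proctitle(sample))
    # -> ip6tables -w 5 -W 100000 -I POSTROUTING -t nat -m comment
    #    --comment 'kubernetes postrouting rules' -j KUBE-POSTROUTING

Decoded the same way, the earlier IPv4 entries resolve to the matching iptables commands that set up the KUBE-FIREWALL, KUBE-MARK-DROP, KUBE-MARK-MASQ and KUBE-POSTROUTING chains and their rules.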
Oct 2 19:27:04.743000 audit[2101]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=2101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:04.743000 audit[2101]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=fffff60a7a00 a2=0 a3=1 items=0 ppid=2032 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.743000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:27:04.751000 audit[2103]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=2103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:04.751000 audit[2103]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffcad7d3c0 a2=0 a3=1 items=0 ppid=2032 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.751000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:27:04.761000 audit[2105]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=2105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:04.761000 audit[2105]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=ffffc4be7fe0 a2=0 a3=1 items=0 ppid=2032 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.761000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:27:04.763554 kubelet[2032]: I1002 19:27:04.763522 2032 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Oct 2 19:27:04.763717 kubelet[2032]: I1002 19:27:04.763695 2032 status_manager.go:176] "Starting to sync pod status with apiserver" Oct 2 19:27:04.763846 kubelet[2032]: I1002 19:27:04.763824 2032 kubelet.go:2113] "Starting kubelet main sync loop" Oct 2 19:27:04.764023 kubelet[2032]: E1002 19:27:04.764003 2032 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:27:04.766210 kubelet[2032]: W1002 19:27:04.766175 2032 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:27:04.766443 kubelet[2032]: E1002 19:27:04.766421 2032 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:27:04.765000 audit[2106]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=2106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:04.765000 audit[2106]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe4fb4d60 a2=0 a3=1 items=0 ppid=2032 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.765000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:27:04.769000 audit[2107]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=2107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:04.769000 audit[2107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd14090b0 a2=0 a3=1 items=0 ppid=2032 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.769000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:27:04.773000 audit[2108]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:04.773000 audit[2108]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd8cc09a0 a2=0 a3=1 items=0 ppid=2032 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.773000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:27:04.977874 kubelet[2032]: E1002 19:27:04.977820 2032 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "172.31.26.69" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:27:05.079407 kubelet[2032]: I1002 19:27:05.079375 2032 kubelet_node_status.go:70] "Attempting to register node" 
node="172.31.26.69" Oct 2 19:27:05.081351 kubelet[2032]: E1002 19:27:05.081300 2032 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.26.69" Oct 2 19:27:05.081804 kubelet[2032]: E1002 19:27:05.081692 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f72e9a6edc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.26.69 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413474524, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 5, 79327095, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.26.69.178a60f72e9a6edc" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:05.140630 kubelet[2032]: E1002 19:27:05.140520 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f72e9aa025", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.26.69 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413487141, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 5, 79334471, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.26.69.178a60f72e9aa025" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:27:05.320331 kubelet[2032]: W1002 19:27:05.320221 2032 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:27:05.320690 kubelet[2032]: E1002 19:27:05.320512 2032 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:27:05.339786 kubelet[2032]: E1002 19:27:05.339698 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:05.340716 kubelet[2032]: E1002 19:27:05.340606 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f72e9ab3fd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.26.69 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413492221, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 5, 79339513, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.26.69.178a60f72e9ab3fd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:27:05.530480 kubelet[2032]: W1002 19:27:05.530444 2032 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:27:05.530684 kubelet[2032]: E1002 19:27:05.530664 2032 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:27:05.639147 kubelet[2032]: W1002 19:27:05.639113 2032 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.26.69" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:27:05.639715 kubelet[2032]: E1002 19:27:05.639691 2032 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.26.69" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:27:05.779323 kubelet[2032]: E1002 19:27:05.779288 2032 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "172.31.26.69" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:27:05.882446 kubelet[2032]: I1002 19:27:05.882401 2032 kubelet_node_status.go:70] "Attempting to register node" node="172.31.26.69" Oct 2 19:27:05.884274 kubelet[2032]: E1002 19:27:05.884228 2032 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.26.69" Oct 2 19:27:05.884595 kubelet[2032]: E1002 19:27:05.884460 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f72e9a6edc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.26.69 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413474524, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 5, 882328345, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.26.69.178a60f72e9a6edc" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
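Every call the kubelet makes at this stage (registering the node, taking its lease, posting events, listing services, nodes, CSI drivers and runtime classes) is rejected because it is still authenticating as system:anonymous; the denials continue until the bootstrap credentials arrive ("Certificate rotation detected" later in the log). A hypothetical sketch that tallies which user/verb/resource combinations are being denied in a saved copy of this log; the node-boot.log file name is an assumption:

import re
from collections import Counter

# RBAC denials appear both plain and with backslash-escaped quotes inside err="..."
# fields, e.g.: User "system:anonymous" cannot list resource "services" in API group "".
DENIAL_RE = re.compile(
    r'User \\?"([^"\\]+)\\?" cannot (\w+) resource \\?"([^"\\]+)\\?"'
    r' in API group \\?"([^"\\]*)\\?"'
)

def denial_counts(log_text: str) -> Counter:
    """Count (user, verb, resource, api_group) tuples found in RBAC denial messages."""
    return Counter(DENIAL_RE.findall(log_text))

if __name__ == "__main__":
    with open("node-boot.log", encoding="utf-8", errors="replace") as fh:
        for key, count in denial_counts(fh.read()).most_common():
            print(count, key)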
Oct 2 19:27:05.886381 kubelet[2032]: E1002 19:27:05.886259 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f72e9aa025", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.26.69 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413487141, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 5, 882359139, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.26.69.178a60f72e9aa025" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:05.941852 kubelet[2032]: E1002 19:27:05.941643 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f72e9ab3fd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.26.69 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413492221, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 5, 882364951, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.26.69.178a60f72e9ab3fd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:27:06.066840 kubelet[2032]: W1002 19:27:06.066794 2032 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:27:06.066840 kubelet[2032]: E1002 19:27:06.066841 2032 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:27:06.340236 kubelet[2032]: E1002 19:27:06.340105 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:07.051944 kubelet[2032]: W1002 19:27:07.051885 2032 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:27:07.051944 kubelet[2032]: E1002 19:27:07.051938 2032 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:27:07.340460 kubelet[2032]: E1002 19:27:07.340336 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:07.381999 kubelet[2032]: E1002 19:27:07.381942 2032 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "172.31.26.69" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:27:07.485882 kubelet[2032]: I1002 19:27:07.485830 2032 kubelet_node_status.go:70] "Attempting to register node" node="172.31.26.69" Oct 2 19:27:07.487496 kubelet[2032]: E1002 19:27:07.487369 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f72e9a6edc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.26.69 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413474524, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 7, 485782516, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.26.69.178a60f72e9a6edc" is forbidden: User "system:anonymous" cannot patch 
resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:07.488147 kubelet[2032]: E1002 19:27:07.488111 2032 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.26.69" Oct 2 19:27:07.488778 kubelet[2032]: E1002 19:27:07.488678 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f72e9aa025", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.26.69 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413487141, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 7, 485794662, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.26.69.178a60f72e9aa025" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:07.489976 kubelet[2032]: E1002 19:27:07.489865 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f72e9ab3fd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.26.69 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413492221, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 7, 485799388, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.26.69.178a60f72e9ab3fd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:27:07.881873 kubelet[2032]: W1002 19:27:07.881830 2032 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:27:07.881873 kubelet[2032]: E1002 19:27:07.881881 2032 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:27:08.341647 kubelet[2032]: E1002 19:27:08.341314 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:08.755988 kubelet[2032]: W1002 19:27:08.755952 2032 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.26.69" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:27:08.756248 kubelet[2032]: E1002 19:27:08.756226 2032 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.26.69" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:27:09.131784 kubelet[2032]: W1002 19:27:09.131732 2032 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:27:09.131784 kubelet[2032]: E1002 19:27:09.131786 2032 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:27:09.341621 kubelet[2032]: E1002 19:27:09.341571 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:10.341758 kubelet[2032]: E1002 19:27:10.341689 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:10.413363 kubelet[2032]: W1002 19:27:10.413287 2032 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:27:10.413363 kubelet[2032]: E1002 19:27:10.413335 2032 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:27:10.584071 kubelet[2032]: E1002 19:27:10.584007 2032 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "172.31.26.69" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:27:10.690166 kubelet[2032]: I1002 19:27:10.689448 2032 kubelet_node_status.go:70] "Attempting to register node" node="172.31.26.69" Oct 2 19:27:10.691250 kubelet[2032]: E1002 19:27:10.691210 2032 
kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.26.69" Oct 2 19:27:10.691358 kubelet[2032]: E1002 19:27:10.691188 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f72e9a6edc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.26.69 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413474524, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 10, 689400890, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.26.69.178a60f72e9a6edc" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:10.692908 kubelet[2032]: E1002 19:27:10.692519 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f72e9aa025", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.26.69 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413487141, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 10, 689408845, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.26.69.178a60f72e9aa025" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
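The lease-controller retries above back off in a simple doubling pattern while the client is still anonymous: 400ms, 800ms, 1.6s, 3.2s and now 6.4s. A small, hypothetical helper for pulling those intervals out of a saved copy of this log (again assuming a node-boot.log file):

import re

# Matches the retry interval in kubelet lease-controller errors, e.g.
# "failed to ensure lease exists, will retry in 1.6s, error: ...".
RETRY_RE = re.compile(r"failed to ensure lease exists, will retry in ([0-9.]+m?s)")

def retry_intervals(log_text: str) -> list[str]:
    """Return the lease retry intervals in the order they appear."""
    return RETRY_RE.findall(log_text)

if __name__ == "__main__":
    with open("node-boot.log", encoding="utf-8", errors="replace") as fh:
        print(retry_intervals(fh.read()))
    # For the excerpt above this prints: ['400ms', '800ms', '1.6s', '3.2s', '6.4s']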
Oct 2 19:27:10.693787 kubelet[2032]: E1002 19:27:10.693685 2032 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.69.178a60f72e9ab3fd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.69", UID:"172.31.26.69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.26.69 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.69"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 413492221, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 10, 689413965, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.26.69.178a60f72e9ab3fd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:11.342600 kubelet[2032]: E1002 19:27:11.342540 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:12.343185 kubelet[2032]: E1002 19:27:12.343124 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:12.862645 kubelet[2032]: W1002 19:27:12.862610 2032 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.26.69" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:27:12.862895 kubelet[2032]: E1002 19:27:12.862874 2032 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.26.69" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:27:12.944178 kubelet[2032]: W1002 19:27:12.944148 2032 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:27:12.944360 kubelet[2032]: E1002 19:27:12.944339 2032 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:27:13.343388 kubelet[2032]: E1002 19:27:13.343330 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:14.318282 kubelet[2032]: I1002 19:27:14.318222 2032 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:27:14.344133 kubelet[2032]: E1002 19:27:14.344091 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:27:14.476885 kubelet[2032]: E1002 19:27:14.476853 2032 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.26.69\" not found" Oct 2 19:27:14.719417 kubelet[2032]: E1002 19:27:14.719375 2032 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.26.69" not found Oct 2 19:27:15.344550 kubelet[2032]: E1002 19:27:15.344514 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:15.967514 kubelet[2032]: E1002 19:27:15.967479 2032 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.26.69" not found Oct 2 19:27:16.346003 kubelet[2032]: E1002 19:27:16.345729 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:16.990092 kubelet[2032]: E1002 19:27:16.990026 2032 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.26.69\" not found" node="172.31.26.69" Oct 2 19:27:17.092464 kubelet[2032]: I1002 19:27:17.092436 2032 kubelet_node_status.go:70] "Attempting to register node" node="172.31.26.69" Oct 2 19:27:17.346480 kubelet[2032]: E1002 19:27:17.346167 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:17.370593 kubelet[2032]: I1002 19:27:17.370557 2032 kubelet_node_status.go:73] "Successfully registered node" node="172.31.26.69" Oct 2 19:27:17.383827 kubelet[2032]: E1002 19:27:17.383792 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:17.484933 kubelet[2032]: E1002 19:27:17.484905 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:17.585388 kubelet[2032]: E1002 19:27:17.585359 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:17.686165 kubelet[2032]: E1002 19:27:17.686135 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:17.786451 kubelet[2032]: E1002 19:27:17.786357 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:17.873485 sudo[1828]: pam_unix(sudo:session): session closed for user root Oct 2 19:27:17.872000 audit[1828]: USER_END pid=1828 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:17.876596 kernel: kauditd_printk_skb: 540 callbacks suppressed Oct 2 19:27:17.876654 kernel: audit: type=1106 audit(1696274837.872:625): pid=1828 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:17.872000 audit[1828]: CRED_DISP pid=1828 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? 
terminal=? res=success' Oct 2 19:27:17.887345 kubelet[2032]: E1002 19:27:17.887300 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:17.894021 kernel: audit: type=1104 audit(1696274837.872:626): pid=1828 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:17.899000 audit[1825]: USER_END pid=1825 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:17.899022 sshd[1825]: pam_unix(sshd:session): session closed for user core Oct 2 19:27:17.899000 audit[1825]: CRED_DISP pid=1825 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:17.915607 systemd[1]: sshd@6-172.31.26.69:22-139.178.89.65:36688.service: Deactivated successfully. Oct 2 19:27:17.916791 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:27:17.924022 kernel: audit: type=1106 audit(1696274837.899:627): pid=1825 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:17.924182 kernel: audit: type=1104 audit(1696274837.899:628): pid=1825 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:17.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.26.69:22-139.178.89.65:36688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:17.924453 systemd-logind[1560]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:27:17.933851 kernel: audit: type=1131 audit(1696274837.913:629): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.26.69:22-139.178.89.65:36688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:17.934964 systemd-logind[1560]: Removed session 7. 
Oct 2 19:27:17.988537 kubelet[2032]: E1002 19:27:17.987653 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:18.088499 kubelet[2032]: E1002 19:27:18.088456 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:18.189242 kubelet[2032]: E1002 19:27:18.189189 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:18.290068 kubelet[2032]: E1002 19:27:18.289950 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:18.346586 kubelet[2032]: E1002 19:27:18.346549 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:18.391158 kubelet[2032]: E1002 19:27:18.391113 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:18.492033 kubelet[2032]: E1002 19:27:18.492007 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:18.593196 kubelet[2032]: E1002 19:27:18.593089 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:18.693952 kubelet[2032]: E1002 19:27:18.693893 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:18.794554 kubelet[2032]: E1002 19:27:18.794505 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:18.895646 kubelet[2032]: E1002 19:27:18.895615 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:18.996211 kubelet[2032]: E1002 19:27:18.996181 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:19.097114 kubelet[2032]: E1002 19:27:19.097044 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:19.197900 kubelet[2032]: E1002 19:27:19.197795 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:19.298732 kubelet[2032]: E1002 19:27:19.298683 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:19.347390 kubelet[2032]: E1002 19:27:19.347365 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:19.399253 kubelet[2032]: E1002 19:27:19.399209 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:19.500213 kubelet[2032]: E1002 19:27:19.500091 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:19.600827 kubelet[2032]: E1002 19:27:19.600773 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:19.669906 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
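The kubelet_node_status.go:458 entries above keep reporting node "172.31.26.69" not found for a few seconds even after the "Successfully registered node" message at 19:27:17.370, most likely because the kubelet reads the Node object back through an informer cache that has not synced yet. A minimal client-go sketch of the same lookup, purely illustrative; the node name is taken from the log, and the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; the kubelet itself uses its own client credentials.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The same lookup the kubelet's node lister keeps failing until its cache catches up.
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "172.31.26.69", metav1.GetOptions{})
	if err != nil {
		fmt.Println("node not visible yet:", err)
		return
	}
	fmt.Println("node registered:", node.Name, node.CreationTimestamp)
}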
Oct 2 19:27:19.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:19.679124 kernel: audit: type=1131 audit(1696274839.669:630): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:19.699000 audit: BPF prog-id=62 op=UNLOAD Oct 2 19:27:19.701835 kubelet[2032]: E1002 19:27:19.701790 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:19.699000 audit: BPF prog-id=61 op=UNLOAD Oct 2 19:27:19.705464 kernel: audit: type=1334 audit(1696274839.699:631): prog-id=62 op=UNLOAD Oct 2 19:27:19.705536 kernel: audit: type=1334 audit(1696274839.699:632): prog-id=61 op=UNLOAD Oct 2 19:27:19.705587 kernel: audit: type=1334 audit(1696274839.699:633): prog-id=60 op=UNLOAD Oct 2 19:27:19.699000 audit: BPF prog-id=60 op=UNLOAD Oct 2 19:27:19.802670 kubelet[2032]: E1002 19:27:19.802105 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:19.902926 kubelet[2032]: E1002 19:27:19.902890 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:20.003782 kubelet[2032]: E1002 19:27:20.003732 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:20.104666 kubelet[2032]: E1002 19:27:20.104390 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:20.205147 kubelet[2032]: E1002 19:27:20.205100 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:20.305810 kubelet[2032]: E1002 19:27:20.305781 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:20.348487 kubelet[2032]: E1002 19:27:20.348446 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:20.406065 kubelet[2032]: E1002 19:27:20.406005 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:20.506737 kubelet[2032]: E1002 19:27:20.506710 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:20.607530 kubelet[2032]: E1002 19:27:20.607505 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:20.708487 kubelet[2032]: E1002 19:27:20.708223 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:20.809018 kubelet[2032]: E1002 19:27:20.808969 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.69\" not found" Oct 2 19:27:20.910441 kubelet[2032]: I1002 19:27:20.910389 2032 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:27:20.911174 env[1571]: time="2023-10-02T19:27:20.911035711Z" level=info msg="No cni config template is specified, wait for other system 
components to drop the config." Oct 2 19:27:20.912233 kubelet[2032]: I1002 19:27:20.912197 2032 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:27:21.347175 kubelet[2032]: I1002 19:27:21.347142 2032 apiserver.go:52] "Watching apiserver" Oct 2 19:27:21.348693 kubelet[2032]: E1002 19:27:21.348669 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:21.351738 kubelet[2032]: I1002 19:27:21.351695 2032 topology_manager.go:210] "Topology Admit Handler" Oct 2 19:27:21.351916 kubelet[2032]: I1002 19:27:21.351853 2032 topology_manager.go:210] "Topology Admit Handler" Oct 2 19:27:21.362920 systemd[1]: Created slice kubepods-besteffort-pod6a7624a5_3e1b_42f0_80e0_340180c1ef6c.slice. Oct 2 19:27:21.375600 kubelet[2032]: I1002 19:27:21.375564 2032 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 2 19:27:21.379608 systemd[1]: Created slice kubepods-burstable-podf2c34d84_deae_4d38_9f07_7825918c3d74.slice. Oct 2 19:27:21.469249 kubelet[2032]: I1002 19:27:21.469216 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-bpf-maps\") pod \"cilium-rhdkw\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " pod="kube-system/cilium-rhdkw" Oct 2 19:27:21.469501 kubelet[2032]: I1002 19:27:21.469478 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-hostproc\") pod \"cilium-rhdkw\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " pod="kube-system/cilium-rhdkw" Oct 2 19:27:21.469657 kubelet[2032]: I1002 19:27:21.469636 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-cilium-cgroup\") pod \"cilium-rhdkw\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " pod="kube-system/cilium-rhdkw" Oct 2 19:27:21.469846 kubelet[2032]: I1002 19:27:21.469825 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-host-proc-sys-net\") pod \"cilium-rhdkw\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " pod="kube-system/cilium-rhdkw" Oct 2 19:27:21.470006 kubelet[2032]: I1002 19:27:21.469985 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcnfr\" (UniqueName: \"kubernetes.io/projected/f2c34d84-deae-4d38-9f07-7825918c3d74-kube-api-access-vcnfr\") pod \"cilium-rhdkw\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " pod="kube-system/cilium-rhdkw" Oct 2 19:27:21.470185 kubelet[2032]: I1002 19:27:21.470164 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a7624a5-3e1b-42f0-80e0-340180c1ef6c-xtables-lock\") pod \"kube-proxy-8mqhn\" (UID: \"6a7624a5-3e1b-42f0-80e0-340180c1ef6c\") " pod="kube-system/kube-proxy-8mqhn" Oct 2 19:27:21.470338 kubelet[2032]: I1002 19:27:21.470318 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28jdq\" (UniqueName: 
\"kubernetes.io/projected/6a7624a5-3e1b-42f0-80e0-340180c1ef6c-kube-api-access-28jdq\") pod \"kube-proxy-8mqhn\" (UID: \"6a7624a5-3e1b-42f0-80e0-340180c1ef6c\") " pod="kube-system/kube-proxy-8mqhn" Oct 2 19:27:21.470526 kubelet[2032]: I1002 19:27:21.470505 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-cni-path\") pod \"cilium-rhdkw\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " pod="kube-system/cilium-rhdkw" Oct 2 19:27:21.470676 kubelet[2032]: I1002 19:27:21.470655 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-lib-modules\") pod \"cilium-rhdkw\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " pod="kube-system/cilium-rhdkw" Oct 2 19:27:21.470824 kubelet[2032]: I1002 19:27:21.470803 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-xtables-lock\") pod \"cilium-rhdkw\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " pod="kube-system/cilium-rhdkw" Oct 2 19:27:21.470984 kubelet[2032]: I1002 19:27:21.470962 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2c34d84-deae-4d38-9f07-7825918c3d74-clustermesh-secrets\") pod \"cilium-rhdkw\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " pod="kube-system/cilium-rhdkw" Oct 2 19:27:21.471177 kubelet[2032]: I1002 19:27:21.471156 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2c34d84-deae-4d38-9f07-7825918c3d74-cilium-config-path\") pod \"cilium-rhdkw\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " pod="kube-system/cilium-rhdkw" Oct 2 19:27:21.471361 kubelet[2032]: I1002 19:27:21.471340 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2c34d84-deae-4d38-9f07-7825918c3d74-hubble-tls\") pod \"cilium-rhdkw\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " pod="kube-system/cilium-rhdkw" Oct 2 19:27:21.471539 kubelet[2032]: I1002 19:27:21.471519 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-cilium-run\") pod \"cilium-rhdkw\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " pod="kube-system/cilium-rhdkw" Oct 2 19:27:21.471699 kubelet[2032]: I1002 19:27:21.471679 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-etc-cni-netd\") pod \"cilium-rhdkw\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " pod="kube-system/cilium-rhdkw" Oct 2 19:27:21.471871 kubelet[2032]: I1002 19:27:21.471846 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a7624a5-3e1b-42f0-80e0-340180c1ef6c-lib-modules\") pod \"kube-proxy-8mqhn\" (UID: \"6a7624a5-3e1b-42f0-80e0-340180c1ef6c\") " pod="kube-system/kube-proxy-8mqhn" Oct 2 
19:27:21.472037 kubelet[2032]: I1002 19:27:21.472016 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-host-proc-sys-kernel\") pod \"cilium-rhdkw\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " pod="kube-system/cilium-rhdkw" Oct 2 19:27:21.472240 kubelet[2032]: I1002 19:27:21.472219 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6a7624a5-3e1b-42f0-80e0-340180c1ef6c-kube-proxy\") pod \"kube-proxy-8mqhn\" (UID: \"6a7624a5-3e1b-42f0-80e0-340180c1ef6c\") " pod="kube-system/kube-proxy-8mqhn" Oct 2 19:27:21.472385 kubelet[2032]: I1002 19:27:21.472363 2032 reconciler.go:41] "Reconciler: start to sync state" Oct 2 19:27:21.689501 env[1571]: time="2023-10-02T19:27:21.689417387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rhdkw,Uid:f2c34d84-deae-4d38-9f07-7825918c3d74,Namespace:kube-system,Attempt:0,}" Oct 2 19:27:21.976632 env[1571]: time="2023-10-02T19:27:21.976463675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8mqhn,Uid:6a7624a5-3e1b-42f0-80e0-340180c1ef6c,Namespace:kube-system,Attempt:0,}" Oct 2 19:27:22.334920 env[1571]: time="2023-10-02T19:27:22.334754441Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:22.341471 env[1571]: time="2023-10-02T19:27:22.341400262Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:22.345192 env[1571]: time="2023-10-02T19:27:22.345127854Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:22.349112 env[1571]: time="2023-10-02T19:27:22.347809456Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:22.349471 kubelet[2032]: E1002 19:27:22.349408 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:22.355198 env[1571]: time="2023-10-02T19:27:22.355148525Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:22.356935 env[1571]: time="2023-10-02T19:27:22.356892159Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:22.360610 env[1571]: time="2023-10-02T19:27:22.360559959Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:22.362243 env[1571]: time="2023-10-02T19:27:22.362203110Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Oct 2 19:27:22.410095 env[1571]: time="2023-10-02T19:27:22.409931018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:27:22.410290 env[1571]: time="2023-10-02T19:27:22.410014872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:27:22.410290 env[1571]: time="2023-10-02T19:27:22.410042943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:27:22.411648 env[1571]: time="2023-10-02T19:27:22.410793644Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c pid=2130 runtime=io.containerd.runc.v2 Oct 2 19:27:22.419403 env[1571]: time="2023-10-02T19:27:22.419211782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:27:22.419403 env[1571]: time="2023-10-02T19:27:22.419299790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:27:22.419403 env[1571]: time="2023-10-02T19:27:22.419329146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:27:22.420211 env[1571]: time="2023-10-02T19:27:22.420043983Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ad5ed6a0ec950788d5d31165a17af4f33f664d10ef8a7b5209a7a14f318ddfa pid=2131 runtime=io.containerd.runc.v2 Oct 2 19:27:22.453618 systemd[1]: Started cri-containerd-742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c.scope. Oct 2 19:27:22.466357 systemd[1]: Started cri-containerd-9ad5ed6a0ec950788d5d31165a17af4f33f664d10ef8a7b5209a7a14f318ddfa.scope. 
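The reconciler_common entries a little further up enumerate the volumes attached to the cilium-rhdkw and kube-proxy-8mqhn pods before their sandboxes start, most of them kubernetes.io/host-path mounts (bpf-maps, hostproc, cilium-cgroup, cni-path, and so on). A short sketch of what two of those look like as k8s.io/api/core/v1 definitions; the log records only the volume names, so the host paths shown are assumptions based on common Cilium defaults:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Volume names come from the reconciler entries in the log; the host paths
	// are assumptions, since the log does not record them.
	vols := []corev1.Volume{
		{
			Name: "bpf-maps",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/sys/fs/bpf"}, // assumed default
			},
		},
		{
			Name: "cni-path",
			VolumeSource: corev1.VolumeSource{
				// /opt/cni/bin matches the BIN_PATH value dumped later in the log.
				HostPath: &corev1.HostPathVolumeSource{Path: "/opt/cni/bin"},
			},
		},
	}
	for _, v := range vols {
		fmt.Println(v.Name, v.VolumeSource.HostPath.Path)
	}
}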
Oct 2 19:27:22.504000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.504000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.504000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.504000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.504000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.504000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.504000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.504000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.504000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.512000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.512000 audit: BPF prog-id=70 op=LOAD Oct 2 19:27:22.512000 audit[2148]: AVC avc: denied { bpf } for pid=2148 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.512000 audit[2148]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000115b38 a2=10 a3=0 items=0 ppid=2130 pid=2148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:22.515258 kernel: audit: type=1400 audit(1696274842.504:634): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734323539366333333738303962303339303462656261343338396264 Oct 2 19:27:22.512000 audit[2148]: AVC avc: denied { perfmon } for pid=2148 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.512000 audit[2148]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001155a0 a2=3c a3=0 
items=0 ppid=2130 pid=2148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:22.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734323539366333333738303962303339303462656261343338396264 Oct 2 19:27:22.513000 audit[2148]: AVC avc: denied { bpf } for pid=2148 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.513000 audit[2148]: AVC avc: denied { bpf } for pid=2148 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.513000 audit[2148]: AVC avc: denied { bpf } for pid=2148 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.513000 audit[2148]: AVC avc: denied { perfmon } for pid=2148 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.513000 audit[2148]: AVC avc: denied { perfmon } for pid=2148 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.513000 audit[2148]: AVC avc: denied { perfmon } for pid=2148 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.513000 audit[2148]: AVC avc: denied { perfmon } for pid=2148 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.513000 audit[2148]: AVC avc: denied { perfmon } for pid=2148 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.513000 audit[2148]: AVC avc: denied { bpf } for pid=2148 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.513000 audit[2148]: AVC avc: denied { bpf } for pid=2148 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.513000 audit: BPF prog-id=71 op=LOAD Oct 2 19:27:22.513000 audit[2148]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001158e0 a2=78 a3=0 items=0 ppid=2130 pid=2148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:22.513000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734323539366333333738303962303339303462656261343338396264 Oct 2 19:27:22.515000 audit[2148]: AVC avc: denied { bpf } for pid=2148 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:27:22.515000 audit[2148]: AVC avc: denied { bpf } for pid=2148 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.515000 audit[2148]: AVC avc: denied { perfmon } for pid=2148 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.515000 audit[2148]: AVC avc: denied { perfmon } for pid=2148 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.515000 audit[2148]: AVC avc: denied { perfmon } for pid=2148 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.515000 audit[2148]: AVC avc: denied { perfmon } for pid=2148 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.515000 audit[2148]: AVC avc: denied { perfmon } for pid=2148 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.515000 audit[2148]: AVC avc: denied { bpf } for pid=2148 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.515000 audit[2148]: AVC avc: denied { bpf } for pid=2148 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.515000 audit: BPF prog-id=72 op=LOAD Oct 2 19:27:22.515000 audit[2148]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000115670 a2=78 a3=0 items=0 ppid=2130 pid=2148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:22.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734323539366333333738303962303339303462656261343338396264 Oct 2 19:27:22.515000 audit: BPF prog-id=72 op=UNLOAD Oct 2 19:27:22.515000 audit: BPF prog-id=71 op=UNLOAD Oct 2 19:27:22.515000 audit[2148]: AVC avc: denied { bpf } for pid=2148 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.515000 audit[2148]: AVC avc: denied { bpf } for pid=2148 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.515000 audit[2148]: AVC avc: denied { bpf } for pid=2148 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.515000 audit[2148]: AVC avc: denied { perfmon } for pid=2148 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.515000 audit[2148]: AVC avc: denied { perfmon } for pid=2148 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:27:22.515000 audit[2148]: AVC avc: denied { perfmon } for pid=2148 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.515000 audit[2148]: AVC avc: denied { perfmon } for pid=2148 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.515000 audit[2148]: AVC avc: denied { perfmon } for pid=2148 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.515000 audit[2148]: AVC avc: denied { bpf } for pid=2148 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.515000 audit[2148]: AVC avc: denied { bpf } for pid=2148 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.515000 audit: BPF prog-id=73 op=LOAD Oct 2 19:27:22.515000 audit[2148]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000115b40 a2=78 a3=0 items=0 ppid=2130 pid=2148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:22.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734323539366333333738303962303339303462656261343338396264 Oct 2 19:27:22.538000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.538000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.538000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.538000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.538000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.538000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.539000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.539000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.539000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.539000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.539000 audit: BPF prog-id=74 op=LOAD Oct 2 19:27:22.541000 audit[2154]: AVC avc: denied { bpf } for pid=2154 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.541000 audit[2154]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=2131 pid=2154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:22.541000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961643565643661306563393530373838643564333131363561313761 Oct 2 19:27:22.541000 audit[2154]: AVC avc: denied { perfmon } for pid=2154 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.541000 audit[2154]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2131 pid=2154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:22.541000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961643565643661306563393530373838643564333131363561313761 Oct 2 19:27:22.542000 audit[2154]: AVC avc: denied { bpf } for pid=2154 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.542000 audit[2154]: AVC avc: denied { bpf } for pid=2154 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.542000 audit[2154]: AVC avc: denied { bpf } for pid=2154 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.542000 audit[2154]: AVC avc: denied { perfmon } for pid=2154 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.542000 audit[2154]: AVC avc: denied { perfmon } for pid=2154 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.542000 audit[2154]: AVC avc: denied { perfmon } for pid=2154 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.542000 audit[2154]: AVC avc: denied { perfmon } for pid=2154 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.542000 audit[2154]: AVC 
avc: denied { perfmon } for pid=2154 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.542000 audit[2154]: AVC avc: denied { bpf } for pid=2154 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.542000 audit[2154]: AVC avc: denied { bpf } for pid=2154 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.542000 audit: BPF prog-id=75 op=LOAD Oct 2 19:27:22.542000 audit[2154]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2131 pid=2154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:22.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961643565643661306563393530373838643564333131363561313761 Oct 2 19:27:22.543000 audit[2154]: AVC avc: denied { bpf } for pid=2154 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.543000 audit[2154]: AVC avc: denied { bpf } for pid=2154 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.543000 audit[2154]: AVC avc: denied { perfmon } for pid=2154 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.543000 audit[2154]: AVC avc: denied { perfmon } for pid=2154 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.543000 audit[2154]: AVC avc: denied { perfmon } for pid=2154 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.543000 audit[2154]: AVC avc: denied { perfmon } for pid=2154 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.543000 audit[2154]: AVC avc: denied { perfmon } for pid=2154 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.543000 audit[2154]: AVC avc: denied { bpf } for pid=2154 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.543000 audit[2154]: AVC avc: denied { bpf } for pid=2154 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.543000 audit: BPF prog-id=76 op=LOAD Oct 2 19:27:22.543000 audit[2154]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=2131 pid=2154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:22.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961643565643661306563393530373838643564333131363561313761 Oct 2 19:27:22.544000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:27:22.544000 audit: BPF prog-id=75 op=UNLOAD Oct 2 19:27:22.545000 audit[2154]: AVC avc: denied { bpf } for pid=2154 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.545000 audit[2154]: AVC avc: denied { bpf } for pid=2154 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.545000 audit[2154]: AVC avc: denied { bpf } for pid=2154 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.545000 audit[2154]: AVC avc: denied { perfmon } for pid=2154 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.545000 audit[2154]: AVC avc: denied { perfmon } for pid=2154 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.545000 audit[2154]: AVC avc: denied { perfmon } for pid=2154 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.545000 audit[2154]: AVC avc: denied { perfmon } for pid=2154 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.545000 audit[2154]: AVC avc: denied { perfmon } for pid=2154 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.545000 audit[2154]: AVC avc: denied { bpf } for pid=2154 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.545000 audit[2154]: AVC avc: denied { bpf } for pid=2154 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.545000 audit: BPF prog-id=77 op=LOAD Oct 2 19:27:22.545000 audit[2154]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2131 pid=2154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:22.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961643565643661306563393530373838643564333131363561313761 Oct 2 19:27:22.559295 env[1571]: time="2023-10-02T19:27:22.559236487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rhdkw,Uid:f2c34d84-deae-4d38-9f07-7825918c3d74,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c\"" Oct 2 19:27:22.574119 env[1571]: time="2023-10-02T19:27:22.574017725Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 2 19:27:22.596196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3995815810.mount: Deactivated successfully. Oct 2 19:27:22.609800 env[1571]: time="2023-10-02T19:27:22.609738075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8mqhn,Uid:6a7624a5-3e1b-42f0-80e0-340180c1ef6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ad5ed6a0ec950788d5d31165a17af4f33f664d10ef8a7b5209a7a14f318ddfa\"" Oct 2 19:27:23.350410 kubelet[2032]: E1002 19:27:23.350351 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:24.333956 kubelet[2032]: E1002 19:27:24.333890 2032 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:24.350648 kubelet[2032]: E1002 19:27:24.350580 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:25.351748 kubelet[2032]: E1002 19:27:25.351616 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:26.352194 kubelet[2032]: E1002 19:27:26.352124 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:27.353107 kubelet[2032]: E1002 19:27:27.352988 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:28.353805 kubelet[2032]: E1002 19:27:28.353753 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:29.354234 kubelet[2032]: E1002 19:27:29.354126 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:29.602812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3092022798.mount: Deactivated successfully. 
Oct 2 19:27:30.355417 kubelet[2032]: E1002 19:27:30.355317 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:31.355614 kubelet[2032]: E1002 19:27:31.355540 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:32.356701 kubelet[2032]: E1002 19:27:32.356632 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:33.357602 kubelet[2032]: E1002 19:27:33.357544 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:33.965118 env[1571]: time="2023-10-02T19:27:33.964909064Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:33.968539 env[1571]: time="2023-10-02T19:27:33.968446939Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:33.972144 env[1571]: time="2023-10-02T19:27:33.972046826Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:33.973431 env[1571]: time="2023-10-02T19:27:33.973368105Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 2 19:27:33.975938 env[1571]: time="2023-10-02T19:27:33.975881203Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.9\"" Oct 2 19:27:33.977929 env[1571]: time="2023-10-02T19:27:33.977866345Z" level=info msg="CreateContainer within sandbox \"742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:27:33.998755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount439082048.mount: Deactivated successfully. Oct 2 19:27:34.009979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1763941832.mount: Deactivated successfully. Oct 2 19:27:34.020765 env[1571]: time="2023-10-02T19:27:34.020656132Z" level=info msg="CreateContainer within sandbox \"742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7c5727e167f5c93a1b8c88747d99289a7b1e65d19bb3ff3bbed46f2db40a86fd\"" Oct 2 19:27:34.021964 env[1571]: time="2023-10-02T19:27:34.021915058Z" level=info msg="StartContainer for \"7c5727e167f5c93a1b8c88747d99289a7b1e65d19bb3ff3bbed46f2db40a86fd\"" Oct 2 19:27:34.080441 systemd[1]: Started cri-containerd-7c5727e167f5c93a1b8c88747d99289a7b1e65d19bb3ff3bbed46f2db40a86fd.scope. Oct 2 19:27:34.117708 systemd[1]: cri-containerd-7c5727e167f5c93a1b8c88747d99289a7b1e65d19bb3ff3bbed46f2db40a86fd.scope: Deactivated successfully. 
Oct 2 19:27:34.358951 kubelet[2032]: E1002 19:27:34.358710 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:34.370349 env[1571]: time="2023-10-02T19:27:34.370255427Z" level=info msg="shim disconnected" id=7c5727e167f5c93a1b8c88747d99289a7b1e65d19bb3ff3bbed46f2db40a86fd Oct 2 19:27:34.370806 env[1571]: time="2023-10-02T19:27:34.370767484Z" level=warning msg="cleaning up after shim disconnected" id=7c5727e167f5c93a1b8c88747d99289a7b1e65d19bb3ff3bbed46f2db40a86fd namespace=k8s.io Oct 2 19:27:34.370940 env[1571]: time="2023-10-02T19:27:34.370911454Z" level=info msg="cleaning up dead shim" Oct 2 19:27:34.398713 env[1571]: time="2023-10-02T19:27:34.398650017Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:27:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2224 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:27:34Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7c5727e167f5c93a1b8c88747d99289a7b1e65d19bb3ff3bbed46f2db40a86fd/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:27:34.399505 env[1571]: time="2023-10-02T19:27:34.399362412Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed" Oct 2 19:27:34.400209 env[1571]: time="2023-10-02T19:27:34.399970929Z" level=error msg="Failed to pipe stdout of container \"7c5727e167f5c93a1b8c88747d99289a7b1e65d19bb3ff3bbed46f2db40a86fd\"" error="reading from a closed fifo" Oct 2 19:27:34.400849 env[1571]: time="2023-10-02T19:27:34.400465244Z" level=error msg="Failed to pipe stderr of container \"7c5727e167f5c93a1b8c88747d99289a7b1e65d19bb3ff3bbed46f2db40a86fd\"" error="reading from a closed fifo" Oct 2 19:27:34.402753 env[1571]: time="2023-10-02T19:27:34.402670920Z" level=error msg="StartContainer for \"7c5727e167f5c93a1b8c88747d99289a7b1e65d19bb3ff3bbed46f2db40a86fd\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:27:34.403867 kubelet[2032]: E1002 19:27:34.403325 2032 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7c5727e167f5c93a1b8c88747d99289a7b1e65d19bb3ff3bbed46f2db40a86fd" Oct 2 19:27:34.403867 kubelet[2032]: E1002 19:27:34.403487 2032 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:27:34.403867 kubelet[2032]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:27:34.403867 kubelet[2032]: rm /hostbin/cilium-mount Oct 2 19:27:34.404321 kubelet[2032]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-vcnfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-rhdkw_kube-system(f2c34d84-deae-4d38-9f07-7825918c3d74): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:27:34.404481 kubelet[2032]: E1002 19:27:34.403551 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rhdkw" podUID=f2c34d84-deae-4d38-9f07-7825918c3d74 Oct 2 19:27:34.415610 update_engine[1561]: I1002 19:27:34.415546 1561 update_attempter.cc:505] Updating boot flags... Oct 2 19:27:34.827885 env[1571]: time="2023-10-02T19:27:34.827810722Z" level=info msg="CreateContainer within sandbox \"742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:27:34.934299 env[1571]: time="2023-10-02T19:27:34.934216881Z" level=info msg="CreateContainer within sandbox \"742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022\"" Oct 2 19:27:34.935687 env[1571]: time="2023-10-02T19:27:34.935623005Z" level=info msg="StartContainer for \"7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022\"" Oct 2 19:27:34.995211 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c5727e167f5c93a1b8c88747d99289a7b1e65d19bb3ff3bbed46f2db40a86fd-rootfs.mount: Deactivated successfully. Oct 2 19:27:35.094399 systemd[1]: Started cri-containerd-7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022.scope. Oct 2 19:27:35.104370 systemd[1]: run-containerd-runc-k8s.io-7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022-runc.MkNmxm.mount: Deactivated successfully. 
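The &Container{...} structure dumped by kuberuntime_manager above is the kubelet's in-memory (k8s.io/api/core/v1) form of the Cilium mount-cgroup init container. The part relevant to the repeated failure is its SecurityContext: runc derives the process SELinux label from SELinuxOptions{Type:spc_t,Level:s0} and writes it to /proc/self/attr/keycreate, and that write comes back as "invalid argument" here, presumably because the requested label is not usable under the policy loaded on this host (every AVC record above carries the kernel_t context). A reconstruction of just those fields, copied from the dump:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Capabilities and SELinux options copied from the &Container{...} dump above.
	sc := corev1.SecurityContext{
		Capabilities: &corev1.Capabilities{
			Add:  []corev1.Capability{"SYS_ADMIN", "SYS_CHROOT", "SYS_PTRACE"},
			Drop: []corev1.Capability{"ALL"},
		},
		SELinuxOptions: &corev1.SELinuxOptions{
			Type:  "spc_t", // the label runc tries to apply; the keycreate write fails under the loaded policy
			Level: "s0",
		},
	}
	fmt.Printf("%+v\n", sc)
}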
Oct 2 19:27:35.198046 systemd[1]: cri-containerd-7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022.scope: Deactivated successfully. Oct 2 19:27:35.256019 env[1571]: time="2023-10-02T19:27:35.255936771Z" level=info msg="shim disconnected" id=7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022 Oct 2 19:27:35.256019 env[1571]: time="2023-10-02T19:27:35.256015284Z" level=warning msg="cleaning up after shim disconnected" id=7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022 namespace=k8s.io Oct 2 19:27:35.256697 env[1571]: time="2023-10-02T19:27:35.256037899Z" level=info msg="cleaning up dead shim" Oct 2 19:27:35.360383 kubelet[2032]: E1002 19:27:35.359782 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:35.361266 env[1571]: time="2023-10-02T19:27:35.361183602Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:27:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2416 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:27:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:27:35Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:27:35.361729 env[1571]: time="2023-10-02T19:27:35.361638852Z" level=error msg="copy shim log" error="read /proc/self/fd/52: file already closed" Oct 2 19:27:35.362261 env[1571]: time="2023-10-02T19:27:35.362194906Z" level=error msg="Failed to pipe stderr of container \"7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022\"" error="reading from a closed fifo" Oct 2 19:27:35.368970 env[1571]: time="2023-10-02T19:27:35.368868174Z" level=error msg="Failed to pipe stdout of container \"7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022\"" error="reading from a closed fifo" Oct 2 19:27:35.377080 env[1571]: time="2023-10-02T19:27:35.375479578Z" level=error msg="StartContainer for \"7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:27:35.378449 kubelet[2032]: E1002 19:27:35.377512 2032 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022" Oct 2 19:27:35.378449 kubelet[2032]: E1002 19:27:35.377692 2032 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:27:35.378449 kubelet[2032]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:27:35.378449 kubelet[2032]: rm /hostbin/cilium-mount Oct 
2 19:27:35.378806 kubelet[2032]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-vcnfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-rhdkw_kube-system(f2c34d84-deae-4d38-9f07-7825918c3d74): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:27:35.379021 kubelet[2032]: E1002 19:27:35.377778 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rhdkw" podUID=f2c34d84-deae-4d38-9f07-7825918c3d74 Oct 2 19:27:35.831881 kubelet[2032]: I1002 19:27:35.831097 2032 scope.go:115] "RemoveContainer" containerID="7c5727e167f5c93a1b8c88747d99289a7b1e65d19bb3ff3bbed46f2db40a86fd" Oct 2 19:27:35.831881 kubelet[2032]: I1002 19:27:35.831618 2032 scope.go:115] "RemoveContainer" containerID="7c5727e167f5c93a1b8c88747d99289a7b1e65d19bb3ff3bbed46f2db40a86fd" Oct 2 19:27:35.834205 env[1571]: time="2023-10-02T19:27:35.834138908Z" level=info msg="RemoveContainer for \"7c5727e167f5c93a1b8c88747d99289a7b1e65d19bb3ff3bbed46f2db40a86fd\"" Oct 2 19:27:35.838304 env[1571]: time="2023-10-02T19:27:35.838223721Z" level=info msg="RemoveContainer for \"7c5727e167f5c93a1b8c88747d99289a7b1e65d19bb3ff3bbed46f2db40a86fd\" returns successfully" Oct 2 19:27:35.838710 env[1571]: time="2023-10-02T19:27:35.838588250Z" level=info msg="RemoveContainer for \"7c5727e167f5c93a1b8c88747d99289a7b1e65d19bb3ff3bbed46f2db40a86fd\"" Oct 2 19:27:35.838710 env[1571]: time="2023-10-02T19:27:35.838667568Z" level=info msg="RemoveContainer for \"7c5727e167f5c93a1b8c88747d99289a7b1e65d19bb3ff3bbed46f2db40a86fd\" returns successfully" Oct 2 19:27:35.839866 kubelet[2032]: E1002 19:27:35.839419 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup 
pod=cilium-rhdkw_kube-system(f2c34d84-deae-4d38-9f07-7825918c3d74)\"" pod="kube-system/cilium-rhdkw" podUID=f2c34d84-deae-4d38-9f07-7825918c3d74 Oct 2 19:27:35.993906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022-rootfs.mount: Deactivated successfully. Oct 2 19:27:36.102906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1045896094.mount: Deactivated successfully. Oct 2 19:27:36.361258 kubelet[2032]: E1002 19:27:36.360776 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:36.771295 env[1571]: time="2023-10-02T19:27:36.771114301Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:36.774342 env[1571]: time="2023-10-02T19:27:36.774290589Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0393a046c6ac3c39d56f9b536c02216184f07904e0db26449490d0cb1d1fe343,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:36.778280 env[1571]: time="2023-10-02T19:27:36.778227134Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:36.782264 env[1571]: time="2023-10-02T19:27:36.782216961Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d8c8e3e8fe630c3f2d84a22722d4891343196483ac4cc02c1ba9345b1bfc8a3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:36.783806 env[1571]: time="2023-10-02T19:27:36.783745888Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.9\" returns image reference \"sha256:0393a046c6ac3c39d56f9b536c02216184f07904e0db26449490d0cb1d1fe343\"" Oct 2 19:27:36.788182 env[1571]: time="2023-10-02T19:27:36.788131579Z" level=info msg="CreateContainer within sandbox \"9ad5ed6a0ec950788d5d31165a17af4f33f664d10ef8a7b5209a7a14f318ddfa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:27:36.807658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3382418637.mount: Deactivated successfully. Oct 2 19:27:36.824775 env[1571]: time="2023-10-02T19:27:36.824684040Z" level=info msg="CreateContainer within sandbox \"9ad5ed6a0ec950788d5d31165a17af4f33f664d10ef8a7b5209a7a14f318ddfa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2bacd48513f450b7c723f0860afd0e05d39b7f13176ef4cad3028b9b7353a3ae\"" Oct 2 19:27:36.825518 env[1571]: time="2023-10-02T19:27:36.825471075Z" level=info msg="StartContainer for \"2bacd48513f450b7c723f0860afd0e05d39b7f13176ef4cad3028b9b7353a3ae\"" Oct 2 19:27:36.837682 kubelet[2032]: E1002 19:27:36.837619 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-rhdkw_kube-system(f2c34d84-deae-4d38-9f07-7825918c3d74)\"" pod="kube-system/cilium-rhdkw" podUID=f2c34d84-deae-4d38-9f07-7825918c3d74 Oct 2 19:27:36.873526 systemd[1]: Started cri-containerd-2bacd48513f450b7c723f0860afd0e05d39b7f13176ef4cad3028b9b7353a3ae.scope. 
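
The repeated mount-cgroup StartContainer failures above all end in the same runc error: a failed write of an SELinux label to /proc/self/attr/keycreate (the logged container spec requests SELinuxOptions Type:spc_t, Level:s0). The following is a minimal sketch, not part of the log or of runc, that reproduces the write in question; the full label string is an assumption assembled from the Type and Level fields, and on a node whose loaded policy does not accept that label the kernel rejects the write with EINVAL, which surfaces as the "invalid argument" seen above.

```python
# Minimal sketch (illustrative, not runc itself): the label write performed
# before the container's session keys are created. The full label is an
# assumption built from the Type/Level in the logged spec; on a node whose
# SELinux policy does not know it, the kernel returns EINVAL, matching
# "write /proc/self/attr/keycreate: invalid argument" in the log above.
label = "system_u:system_r:spc_t:s0"

try:
    with open("/proc/self/attr/keycreate", "w") as f:
        f.write(label)
    print("keycreate label set to", label)
except OSError as exc:
    print("keycreate write failed:", exc)  # e.g. [Errno 22] Invalid argument
```
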
Oct 2 19:27:36.923000 audit[2552]: AVC avc: denied { perfmon } for pid=2552 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.927399 kernel: kauditd_printk_skb: 113 callbacks suppressed Oct 2 19:27:36.927530 kernel: audit: type=1400 audit(1696274856.923:670): avc: denied { perfmon } for pid=2552 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.923000 audit[2552]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001bd5a0 a2=3c a3=0 items=0 ppid=2131 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:36.946775 kernel: audit: type=1300 audit(1696274856.923:670): arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001bd5a0 a2=3c a3=0 items=0 ppid=2131 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:36.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262616364343835313366343530623763373233663038363061666430 Oct 2 19:27:36.957612 kernel: audit: type=1327 audit(1696274856.923:670): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262616364343835313366343530623763373233663038363061666430 Oct 2 19:27:36.923000 audit[2552]: AVC avc: denied { bpf } for pid=2552 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.965737 kernel: audit: type=1400 audit(1696274856.923:671): avc: denied { bpf } for pid=2552 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.923000 audit[2552]: AVC avc: denied { bpf } for pid=2552 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.973495 kernel: audit: type=1400 audit(1696274856.923:671): avc: denied { bpf } for pid=2552 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.983416 kernel: audit: type=1400 audit(1696274856.923:671): avc: denied { bpf } for pid=2552 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.923000 audit[2552]: AVC avc: denied { bpf } for pid=2552 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.923000 audit[2552]: AVC avc: denied { perfmon } for pid=2552 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.995804 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount771064066.mount: Deactivated successfully. Oct 2 19:27:36.923000 audit[2552]: AVC avc: denied { perfmon } for pid=2552 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.007146 kernel: audit: type=1400 audit(1696274856.923:671): avc: denied { perfmon } for pid=2552 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.015175 kernel: audit: type=1400 audit(1696274856.923:671): avc: denied { perfmon } for pid=2552 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.015260 kernel: audit: type=1400 audit(1696274856.923:671): avc: denied { perfmon } for pid=2552 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.015313 kernel: audit: type=1400 audit(1696274856.923:671): avc: denied { perfmon } for pid=2552 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.923000 audit[2552]: AVC avc: denied { perfmon } for pid=2552 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.923000 audit[2552]: AVC avc: denied { perfmon } for pid=2552 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.923000 audit[2552]: AVC avc: denied { perfmon } for pid=2552 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.923000 audit[2552]: AVC avc: denied { bpf } for pid=2552 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.923000 audit[2552]: AVC avc: denied { bpf } for pid=2552 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.923000 audit: BPF prog-id=78 op=LOAD Oct 2 19:27:36.923000 audit[2552]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001bd8e0 a2=78 a3=0 items=0 ppid=2131 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:36.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262616364343835313366343530623763373233663038363061666430 Oct 2 19:27:36.923000 audit[2552]: AVC avc: denied { bpf } for pid=2552 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.923000 audit[2552]: AVC avc: denied { bpf } for pid=2552 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.923000 audit[2552]: AVC avc: denied { perfmon } for pid=2552 comm="runc" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.923000 audit[2552]: AVC avc: denied { perfmon } for pid=2552 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.923000 audit[2552]: AVC avc: denied { perfmon } for pid=2552 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.923000 audit[2552]: AVC avc: denied { perfmon } for pid=2552 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.923000 audit[2552]: AVC avc: denied { perfmon } for pid=2552 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.923000 audit[2552]: AVC avc: denied { bpf } for pid=2552 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.923000 audit[2552]: AVC avc: denied { bpf } for pid=2552 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.923000 audit: BPF prog-id=79 op=LOAD Oct 2 19:27:36.923000 audit[2552]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=40001bd670 a2=78 a3=0 items=0 ppid=2131 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:36.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262616364343835313366343530623763373233663038363061666430 Oct 2 19:27:36.924000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:27:36.924000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:27:36.924000 audit[2552]: AVC avc: denied { bpf } for pid=2552 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.924000 audit[2552]: AVC avc: denied { bpf } for pid=2552 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.924000 audit[2552]: AVC avc: denied { bpf } for pid=2552 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.924000 audit[2552]: AVC avc: denied { perfmon } for pid=2552 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.924000 audit[2552]: AVC avc: denied { perfmon } for pid=2552 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.924000 audit[2552]: AVC avc: denied { perfmon } for pid=2552 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.924000 audit[2552]: AVC avc: denied { perfmon } for pid=2552 comm="runc" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.924000 audit[2552]: AVC avc: denied { perfmon } for pid=2552 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.924000 audit[2552]: AVC avc: denied { bpf } for pid=2552 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.924000 audit[2552]: AVC avc: denied { bpf } for pid=2552 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:36.924000 audit: BPF prog-id=80 op=LOAD Oct 2 19:27:36.924000 audit[2552]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001bdb40 a2=78 a3=0 items=0 ppid=2131 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:36.924000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262616364343835313366343530623763373233663038363061666430 Oct 2 19:27:37.033792 env[1571]: time="2023-10-02T19:27:37.033707651Z" level=info msg="StartContainer for \"2bacd48513f450b7c723f0860afd0e05d39b7f13176ef4cad3028b9b7353a3ae\" returns successfully" Oct 2 19:27:37.161000 audit[2602]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=2602 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:37.161000 audit[2602]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe3372ef0 a2=0 a3=ffff92a416c0 items=0 ppid=2563 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.161000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:27:37.165000 audit[2603]: NETFILTER_CFG table=nat:36 family=10 entries=1 op=nft_register_chain pid=2603 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:37.165000 audit[2603]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc7249310 a2=0 a3=ffffbc9456c0 items=0 ppid=2563 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.165000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:27:37.169000 audit[2604]: NETFILTER_CFG table=mangle:37 family=2 entries=1 op=nft_register_chain pid=2604 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:37.169000 audit[2604]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd44cb310 a2=0 a3=ffff9d0f26c0 items=0 ppid=2563 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.169000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:27:37.169000 audit[2605]: NETFILTER_CFG table=filter:38 family=10 entries=1 op=nft_register_chain pid=2605 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:37.169000 audit[2605]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffea6885f0 a2=0 a3=ffffbde556c0 items=0 ppid=2563 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.169000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:27:37.173000 audit[2606]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=2606 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:37.173000 audit[2606]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc26670e0 a2=0 a3=ffffb6efd6c0 items=0 ppid=2563 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.173000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:27:37.177000 audit[2607]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=2607 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:37.177000 audit[2607]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff00e0b90 a2=0 a3=ffff8bc196c0 items=0 ppid=2563 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.177000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:27:37.266000 audit[2608]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2608 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:37.266000 audit[2608]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffddea7a80 a2=0 a3=ffff9393a6c0 items=0 ppid=2563 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.266000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:27:37.276000 audit[2610]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2610 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:37.276000 audit[2610]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe85782d0 a2=0 a3=ffffa96ef6c0 items=0 ppid=2563 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.276000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:27:37.288000 audit[2613]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=2613 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:37.288000 audit[2613]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffdf547360 a2=0 a3=ffffa33766c0 items=0 ppid=2563 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.288000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:27:37.292000 audit[2614]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2614 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:37.292000 audit[2614]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffd81df20 a2=0 a3=ffffafefb6c0 items=0 ppid=2563 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.292000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:27:37.301000 audit[2616]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2616 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:37.301000 audit[2616]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd230ed80 a2=0 a3=ffffb37626c0 items=0 ppid=2563 pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.301000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:27:37.305000 audit[2617]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=2617 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:37.305000 audit[2617]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffea27f580 a2=0 a3=ffff8a3546c0 items=0 ppid=2563 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.305000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:27:37.313000 audit[2619]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=2619 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:37.313000 audit[2619]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff8b80900 a2=0 a3=ffff8e3186c0 items=0 ppid=2563 
pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.313000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:27:37.326000 audit[2622]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2622 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:37.326000 audit[2622]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd575cf00 a2=0 a3=ffff817396c0 items=0 ppid=2563 pid=2622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.326000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:27:37.331000 audit[2623]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2623 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:37.331000 audit[2623]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcbf80c40 a2=0 a3=ffffa17b86c0 items=0 ppid=2563 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.331000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:27:37.339000 audit[2625]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2625 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:37.339000 audit[2625]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc9b12120 a2=0 a3=ffffa076b6c0 items=0 ppid=2563 pid=2625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.339000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:27:37.344000 audit[2626]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2626 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:37.344000 audit[2626]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd7894f50 a2=0 a3=ffff8137f6c0 items=0 ppid=2563 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.344000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:27:37.353000 audit[2628]: NETFILTER_CFG table=filter:52 family=2 entries=1 
op=nft_register_rule pid=2628 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:37.353000 audit[2628]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcecea2c0 a2=0 a3=ffffbdbec6c0 items=0 ppid=2563 pid=2628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.353000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:27:37.361118 kubelet[2032]: E1002 19:27:37.361013 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:37.365000 audit[2631]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2631 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:37.365000 audit[2631]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffd200ea0 a2=0 a3=ffff982f56c0 items=0 ppid=2563 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.365000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:27:37.377000 audit[2634]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2634 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:37.377000 audit[2634]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe441c680 a2=0 a3=ffff9ed106c0 items=0 ppid=2563 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.377000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:27:37.381000 audit[2635]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=2635 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:37.381000 audit[2635]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd588f9c0 a2=0 a3=ffff9ae726c0 items=0 ppid=2563 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.381000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:27:37.391000 audit[2637]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=2637 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:37.391000 audit[2637]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffc54e3710 a2=0 a3=ffff90bce6c0 items=0 ppid=2563 pid=2637 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.391000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:27:37.402000 audit[2640]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=2640 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:37.402000 audit[2640]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffee9b42b0 a2=0 a3=ffffb95b96c0 items=0 ppid=2563 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.402000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:27:37.430000 audit[2644]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=2644 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:27:37.430000 audit[2644]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffcd2db8f0 a2=0 a3=ffffba6ca6c0 items=0 ppid=2563 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.430000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:27:37.450000 audit[2644]: NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=2644 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:27:37.450000 audit[2644]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffcd2db8f0 a2=0 a3=ffffba6ca6c0 items=0 ppid=2563 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.450000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:27:37.462000 audit[2648]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=2648 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:37.462000 audit[2648]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc46a7bf0 a2=0 a3=ffff90d696c0 items=0 ppid=2563 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.462000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:27:37.471000 audit[2650]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=2650 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:37.471000 audit[2650]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=836 a0=3 a1=fffffefe5be0 a2=0 a3=ffff8d65e6c0 items=0 ppid=2563 pid=2650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.471000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:27:37.483000 audit[2653]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=2653 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:37.483000 audit[2653]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff7d9d9d0 a2=0 a3=ffff995706c0 items=0 ppid=2563 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.483000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:27:37.489000 audit[2654]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=2654 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:37.489000 audit[2654]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffedb2fc40 a2=0 a3=ffff8d41e6c0 items=0 ppid=2563 pid=2654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.489000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:27:37.498000 audit[2656]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=2656 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:37.498000 audit[2656]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe0c8cce0 a2=0 a3=ffff872786c0 items=0 ppid=2563 pid=2656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.498000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:27:37.503000 audit[2657]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2657 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:37.503000 audit[2657]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca71caa0 a2=0 a3=ffff913c16c0 items=0 ppid=2563 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.503000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:27:37.506564 kubelet[2032]: W1002 19:27:37.501345 2032 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2c34d84_deae_4d38_9f07_7825918c3d74.slice/cri-containerd-7c5727e167f5c93a1b8c88747d99289a7b1e65d19bb3ff3bbed46f2db40a86fd.scope WatchSource:0}: container "7c5727e167f5c93a1b8c88747d99289a7b1e65d19bb3ff3bbed46f2db40a86fd" in namespace "k8s.io": not found Oct 2 19:27:37.521000 audit[2659]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=2659 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:37.521000 audit[2659]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe0e52440 a2=0 a3=ffffae9af6c0 items=0 ppid=2563 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.521000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:27:37.534000 audit[2662]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2662 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:37.534000 audit[2662]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffdc9600c0 a2=0 a3=ffffa58a66c0 items=0 ppid=2563 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.534000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:27:37.539000 audit[2663]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2663 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:37.539000 audit[2663]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe9af1230 a2=0 a3=ffffb51ae6c0 items=0 ppid=2563 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.539000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:27:37.549000 audit[2665]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2665 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:37.549000 audit[2665]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd8248d10 a2=0 a3=ffff88c9f6c0 items=0 ppid=2563 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.549000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:27:37.552000 audit[2666]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2666 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:37.552000 audit[2666]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffda8cf890 a2=0 a3=ffffb71236c0 items=0 ppid=2563 pid=2666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.552000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:27:37.561000 audit[2668]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2668 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:37.561000 audit[2668]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffe400d70 a2=0 a3=ffffaf3c76c0 items=0 ppid=2563 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.561000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:27:37.573000 audit[2671]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=2671 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:37.573000 audit[2671]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffcf88ab0 a2=0 a3=ffff831c96c0 items=0 ppid=2563 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.573000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:27:37.585000 audit[2674]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=2674 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:37.585000 audit[2674]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd18b9680 a2=0 a3=ffff94f346c0 items=0 ppid=2563 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.585000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:27:37.590000 audit[2675]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=2675 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Oct 2 19:27:37.590000 audit[2675]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffca06dbd0 a2=0 a3=ffff855836c0 items=0 ppid=2563 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.590000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:27:37.598000 audit[2677]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=2677 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:37.598000 audit[2677]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=fffffaab4180 a2=0 a3=ffff7f9196c0 items=0 ppid=2563 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.598000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:27:37.609000 audit[2680]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=2680 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:37.609000 audit[2680]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffd14afe90 a2=0 a3=ffffa59726c0 items=0 ppid=2563 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.609000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:27:37.630000 audit[2684]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=2684 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:27:37.630000 audit[2684]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffd8e89c20 a2=0 a3=ffffb94386c0 items=0 ppid=2563 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.630000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:27:37.631000 audit[2684]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=2684 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:27:37.631000 audit[2684]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1968 a0=3 a1=ffffd8e89c20 a2=0 a3=ffffb94386c0 items=0 ppid=2563 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.631000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:27:38.361371 kubelet[2032]: E1002 19:27:38.361334 2032 file_linux.go:61] "Unable 
to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:39.362680 kubelet[2032]: E1002 19:27:39.362634 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:40.363833 kubelet[2032]: E1002 19:27:40.363787 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:40.620417 kubelet[2032]: W1002 19:27:40.620373 2032 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2c34d84_deae_4d38_9f07_7825918c3d74.slice/cri-containerd-7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022.scope WatchSource:0}: task 7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022 not found: not found Oct 2 19:27:40.630161 kubelet[2032]: E1002 19:27:40.630128 2032 cadvisor_stats_provider.go:442] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a7624a5_3e1b_42f0_80e0_340180c1ef6c.slice/cri-containerd-2bacd48513f450b7c723f0860afd0e05d39b7f13176ef4cad3028b9b7353a3ae.scope\": RecentStats: unable to find data in memory cache]" Oct 2 19:27:41.364771 kubelet[2032]: E1002 19:27:41.364699 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:42.365622 kubelet[2032]: E1002 19:27:42.365583 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:43.366805 kubelet[2032]: E1002 19:27:43.366759 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:44.333266 kubelet[2032]: E1002 19:27:44.333230 2032 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:44.368024 kubelet[2032]: E1002 19:27:44.367993 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:45.369634 kubelet[2032]: E1002 19:27:45.369571 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:46.370696 kubelet[2032]: E1002 19:27:46.370650 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:47.372189 kubelet[2032]: E1002 19:27:47.372146 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:48.373631 kubelet[2032]: E1002 19:27:48.373588 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:49.374806 kubelet[2032]: E1002 19:27:49.374743 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:50.374991 kubelet[2032]: E1002 19:27:50.374925 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:50.770167 env[1571]: time="2023-10-02T19:27:50.769778676Z" level=info msg="CreateContainer within sandbox \"742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:27:50.791704 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1118049057.mount: Deactivated successfully. Oct 2 19:27:50.804069 env[1571]: time="2023-10-02T19:27:50.803956092Z" level=info msg="CreateContainer within sandbox \"742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594\"" Oct 2 19:27:50.805308 env[1571]: time="2023-10-02T19:27:50.805260060Z" level=info msg="StartContainer for \"b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594\"" Oct 2 19:27:50.852968 systemd[1]: Started cri-containerd-b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594.scope. Oct 2 19:27:50.892116 systemd[1]: cri-containerd-b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594.scope: Deactivated successfully. Oct 2 19:27:51.115308 env[1571]: time="2023-10-02T19:27:51.114551202Z" level=info msg="shim disconnected" id=b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594 Oct 2 19:27:51.115308 env[1571]: time="2023-10-02T19:27:51.114643159Z" level=warning msg="cleaning up after shim disconnected" id=b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594 namespace=k8s.io Oct 2 19:27:51.115308 env[1571]: time="2023-10-02T19:27:51.114670355Z" level=info msg="cleaning up dead shim" Oct 2 19:27:51.140994 env[1571]: time="2023-10-02T19:27:51.140924673Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:27:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2710 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:27:51Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:27:51.141783 env[1571]: time="2023-10-02T19:27:51.141694342Z" level=error msg="copy shim log" error="read /proc/self/fd/55: file already closed" Oct 2 19:27:51.145637 env[1571]: time="2023-10-02T19:27:51.142375271Z" level=error msg="Failed to pipe stdout of container \"b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594\"" error="reading from a closed fifo" Oct 2 19:27:51.145981 env[1571]: time="2023-10-02T19:27:51.145155516Z" level=error msg="Failed to pipe stderr of container \"b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594\"" error="reading from a closed fifo" Oct 2 19:27:51.147510 env[1571]: time="2023-10-02T19:27:51.147440605Z" level=error msg="StartContainer for \"b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:27:51.147949 kubelet[2032]: E1002 19:27:51.147900 2032 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594" Oct 2 19:27:51.148254 kubelet[2032]: E1002 19:27:51.148084 2032 kuberuntime_manager.go:872] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:27:51.148254 kubelet[2032]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:27:51.148254 kubelet[2032]: rm /hostbin/cilium-mount Oct 2 19:27:51.148254 kubelet[2032]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-vcnfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-rhdkw_kube-system(f2c34d84-deae-4d38-9f07-7825918c3d74): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:27:51.148576 kubelet[2032]: E1002 19:27:51.148174 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rhdkw" podUID=f2c34d84-deae-4d38-9f07-7825918c3d74 Oct 2 19:27:51.375941 kubelet[2032]: E1002 19:27:51.375876 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:51.785906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594-rootfs.mount: Deactivated successfully. 
Oct 2 19:27:51.887087 kubelet[2032]: I1002 19:27:51.886991 2032 scope.go:115] "RemoveContainer" containerID="7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022" Oct 2 19:27:51.887771 kubelet[2032]: I1002 19:27:51.887736 2032 scope.go:115] "RemoveContainer" containerID="7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022" Oct 2 19:27:51.891933 env[1571]: time="2023-10-02T19:27:51.891860150Z" level=info msg="RemoveContainer for \"7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022\"" Oct 2 19:27:51.893203 env[1571]: time="2023-10-02T19:27:51.893113345Z" level=info msg="RemoveContainer for \"7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022\"" Oct 2 19:27:51.893640 env[1571]: time="2023-10-02T19:27:51.893561597Z" level=error msg="RemoveContainer for \"7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022\" failed" error="failed to set removing state for container \"7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022\": container is already in removing state" Oct 2 19:27:51.894157 kubelet[2032]: E1002 19:27:51.894128 2032 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022\": container is already in removing state" containerID="7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022" Oct 2 19:27:51.894385 kubelet[2032]: I1002 19:27:51.894361 2032 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022} err="rpc error: code = Unknown desc = failed to set removing state for container \"7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022\": container is already in removing state" Oct 2 19:27:51.895714 env[1571]: time="2023-10-02T19:27:51.895647602Z" level=info msg="RemoveContainer for \"7ce49d5109758666fc7d1f43446064951efd2a2813ad76a1c966640324baa022\" returns successfully" Oct 2 19:27:51.896686 kubelet[2032]: E1002 19:27:51.896646 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-rhdkw_kube-system(f2c34d84-deae-4d38-9f07-7825918c3d74)\"" pod="kube-system/cilium-rhdkw" podUID=f2c34d84-deae-4d38-9f07-7825918c3d74 Oct 2 19:27:51.911767 kubelet[2032]: I1002 19:27:51.911729 2032 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8mqhn" podStartSLOduration=-9.223372001943106e+09 pod.CreationTimestamp="2023-10-02 19:27:17 +0000 UTC" firstStartedPulling="2023-10-02 19:27:22.612179893 +0000 UTC m=+19.241431157" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 19:27:37.858898551 +0000 UTC m=+34.488149851" watchObservedRunningTime="2023-10-02 19:27:51.911669573 +0000 UTC m=+48.540920837" Oct 2 19:27:52.377595 kubelet[2032]: E1002 19:27:52.377529 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:53.378127 kubelet[2032]: E1002 19:27:53.378036 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:54.219922 kubelet[2032]: W1002 19:27:54.219874 2032 manager.go:1174] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2c34d84_deae_4d38_9f07_7825918c3d74.slice/cri-containerd-b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594.scope WatchSource:0}: task b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594 not found: not found Oct 2 19:27:54.378612 kubelet[2032]: E1002 19:27:54.378542 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:55.379624 kubelet[2032]: E1002 19:27:55.379548 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:56.380661 kubelet[2032]: E1002 19:27:56.380615 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:57.381926 kubelet[2032]: E1002 19:27:57.381879 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:58.383167 kubelet[2032]: E1002 19:27:58.383118 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:59.384635 kubelet[2032]: E1002 19:27:59.384574 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:00.385361 kubelet[2032]: E1002 19:28:00.385289 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:01.385942 kubelet[2032]: E1002 19:28:01.385895 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:02.386720 kubelet[2032]: E1002 19:28:02.386668 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:02.766154 kubelet[2032]: E1002 19:28:02.765969 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-rhdkw_kube-system(f2c34d84-deae-4d38-9f07-7825918c3d74)\"" pod="kube-system/cilium-rhdkw" podUID=f2c34d84-deae-4d38-9f07-7825918c3d74 Oct 2 19:28:03.388279 kubelet[2032]: E1002 19:28:03.388216 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:04.333651 kubelet[2032]: E1002 19:28:04.333583 2032 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:04.389219 kubelet[2032]: E1002 19:28:04.389148 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:05.389314 kubelet[2032]: E1002 19:28:05.389276 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:06.390340 kubelet[2032]: E1002 19:28:06.390267 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:07.391261 kubelet[2032]: E1002 19:28:07.391199 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:08.391793 kubelet[2032]: E1002 19:28:08.391720 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:28:09.392100 kubelet[2032]: E1002 19:28:09.392041 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:10.393546 kubelet[2032]: E1002 19:28:10.393500 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:11.395276 kubelet[2032]: E1002 19:28:11.395229 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:12.396438 kubelet[2032]: E1002 19:28:12.396376 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:13.397270 kubelet[2032]: E1002 19:28:13.397224 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:14.398607 kubelet[2032]: E1002 19:28:14.398538 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:14.772377 env[1571]: time="2023-10-02T19:28:14.770417779Z" level=info msg="CreateContainer within sandbox \"742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:28:14.794508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1489832829.mount: Deactivated successfully. Oct 2 19:28:14.807776 env[1571]: time="2023-10-02T19:28:14.807713262Z" level=info msg="CreateContainer within sandbox \"742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb\"" Oct 2 19:28:14.809786 env[1571]: time="2023-10-02T19:28:14.809689099Z" level=info msg="StartContainer for \"124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb\"" Oct 2 19:28:14.858435 systemd[1]: Started cri-containerd-124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb.scope. Oct 2 19:28:14.895606 systemd[1]: cri-containerd-124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb.scope: Deactivated successfully. 
Oct 2 19:28:14.918433 env[1571]: time="2023-10-02T19:28:14.918355842Z" level=info msg="shim disconnected" id=124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb Oct 2 19:28:14.918739 env[1571]: time="2023-10-02T19:28:14.918436177Z" level=warning msg="cleaning up after shim disconnected" id=124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb namespace=k8s.io Oct 2 19:28:14.918739 env[1571]: time="2023-10-02T19:28:14.918459399Z" level=info msg="cleaning up dead shim" Oct 2 19:28:14.950770 env[1571]: time="2023-10-02T19:28:14.950697699Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:28:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2753 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:28:14Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:28:14.951277 env[1571]: time="2023-10-02T19:28:14.951189945Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:28:14.954230 env[1571]: time="2023-10-02T19:28:14.954149661Z" level=error msg="Failed to pipe stderr of container \"124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb\"" error="reading from a closed fifo" Oct 2 19:28:14.955390 env[1571]: time="2023-10-02T19:28:14.955327454Z" level=error msg="Failed to pipe stdout of container \"124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb\"" error="reading from a closed fifo" Oct 2 19:28:14.957559 env[1571]: time="2023-10-02T19:28:14.957491202Z" level=error msg="StartContainer for \"124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:28:14.958303 kubelet[2032]: E1002 19:28:14.957954 2032 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb" Oct 2 19:28:14.958303 kubelet[2032]: E1002 19:28:14.958184 2032 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:28:14.958303 kubelet[2032]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:28:14.958303 kubelet[2032]: rm /hostbin/cilium-mount Oct 2 19:28:14.958680 kubelet[2032]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-vcnfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-rhdkw_kube-system(f2c34d84-deae-4d38-9f07-7825918c3d74): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:28:14.958793 kubelet[2032]: E1002 19:28:14.958243 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rhdkw" podUID=f2c34d84-deae-4d38-9f07-7825918c3d74 Oct 2 19:28:15.399377 kubelet[2032]: E1002 19:28:15.399338 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:15.789011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb-rootfs.mount: Deactivated successfully. 
Oct 2 19:28:15.954207 kubelet[2032]: I1002 19:28:15.954168 2032 scope.go:115] "RemoveContainer" containerID="b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594" Oct 2 19:28:15.955555 kubelet[2032]: I1002 19:28:15.955516 2032 scope.go:115] "RemoveContainer" containerID="b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594" Oct 2 19:28:15.958726 env[1571]: time="2023-10-02T19:28:15.958663692Z" level=info msg="RemoveContainer for \"b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594\"" Oct 2 19:28:15.961611 env[1571]: time="2023-10-02T19:28:15.961506575Z" level=info msg="RemoveContainer for \"b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594\"" Oct 2 19:28:15.961826 env[1571]: time="2023-10-02T19:28:15.961720301Z" level=error msg="RemoveContainer for \"b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594\" failed" error="rpc error: code = NotFound desc = get container info: container \"b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594\" in namespace \"k8s.io\": not found" Oct 2 19:28:15.962650 kubelet[2032]: E1002 19:28:15.962617 2032 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = NotFound desc = get container info: container \"b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594\" in namespace \"k8s.io\": not found" containerID="b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594" Oct 2 19:28:15.962929 kubelet[2032]: E1002 19:28:15.962884 2032 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = NotFound desc = get container info: container "b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594" in namespace "k8s.io": not found; Skipping pod "cilium-rhdkw_kube-system(f2c34d84-deae-4d38-9f07-7825918c3d74)" Oct 2 19:28:15.963579 env[1571]: time="2023-10-02T19:28:15.963495107Z" level=info msg="RemoveContainer for \"b1a36ffeb424a7a3d0faa2634ca160fbe52a0ade7c1fba274bea8a427b135594\" returns successfully" Oct 2 19:28:15.964196 kubelet[2032]: E1002 19:28:15.964147 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-rhdkw_kube-system(f2c34d84-deae-4d38-9f07-7825918c3d74)\"" pod="kube-system/cilium-rhdkw" podUID=f2c34d84-deae-4d38-9f07-7825918c3d74 Oct 2 19:28:16.400668 kubelet[2032]: E1002 19:28:16.400623 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:17.401632 kubelet[2032]: E1002 19:28:17.401568 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:18.022899 kubelet[2032]: W1002 19:28:18.022855 2032 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2c34d84_deae_4d38_9f07_7825918c3d74.slice/cri-containerd-124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb.scope WatchSource:0}: task 124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb not found: not found Oct 2 19:28:18.401766 kubelet[2032]: E1002 19:28:18.401721 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:19.402902 kubelet[2032]: E1002 19:28:19.402832 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:28:20.403956 kubelet[2032]: E1002 19:28:20.403904 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:21.405611 kubelet[2032]: E1002 19:28:21.405557 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:22.406207 kubelet[2032]: E1002 19:28:22.406163 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:23.407937 kubelet[2032]: E1002 19:28:23.407884 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:24.334242 kubelet[2032]: E1002 19:28:24.334166 2032 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:24.408418 kubelet[2032]: E1002 19:28:24.408372 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:25.409570 kubelet[2032]: E1002 19:28:25.409490 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:26.410091 kubelet[2032]: E1002 19:28:26.410005 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:27.411712 kubelet[2032]: E1002 19:28:27.411634 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:28.412146 kubelet[2032]: E1002 19:28:28.412102 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:29.413579 kubelet[2032]: E1002 19:28:29.413494 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:30.413876 kubelet[2032]: E1002 19:28:30.413799 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:30.765836 kubelet[2032]: E1002 19:28:30.765393 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-rhdkw_kube-system(f2c34d84-deae-4d38-9f07-7825918c3d74)\"" pod="kube-system/cilium-rhdkw" podUID=f2c34d84-deae-4d38-9f07-7825918c3d74 Oct 2 19:28:31.414660 kubelet[2032]: E1002 19:28:31.414614 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:32.416220 kubelet[2032]: E1002 19:28:32.416149 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:33.416655 kubelet[2032]: E1002 19:28:33.416611 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:34.418022 kubelet[2032]: E1002 19:28:34.417988 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:35.418922 kubelet[2032]: E1002 19:28:35.418862 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:36.419681 kubelet[2032]: E1002 19:28:36.419625 2032 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:37.419967 kubelet[2032]: E1002 19:28:37.419933 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:38.421355 kubelet[2032]: E1002 19:28:38.421321 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:39.422884 kubelet[2032]: E1002 19:28:39.422821 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:40.423252 kubelet[2032]: E1002 19:28:40.423216 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:41.424702 kubelet[2032]: E1002 19:28:41.424665 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:42.426321 kubelet[2032]: E1002 19:28:42.426286 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:43.427253 kubelet[2032]: E1002 19:28:43.427219 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:44.333809 kubelet[2032]: E1002 19:28:44.333747 2032 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:44.428168 kubelet[2032]: E1002 19:28:44.428138 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:44.765102 kubelet[2032]: E1002 19:28:44.765014 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-rhdkw_kube-system(f2c34d84-deae-4d38-9f07-7825918c3d74)\"" pod="kube-system/cilium-rhdkw" podUID=f2c34d84-deae-4d38-9f07-7825918c3d74 Oct 2 19:28:45.428844 kubelet[2032]: E1002 19:28:45.428807 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:46.429916 kubelet[2032]: E1002 19:28:46.429881 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:47.431616 kubelet[2032]: E1002 19:28:47.431581 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:48.432716 kubelet[2032]: E1002 19:28:48.432662 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:49.433552 kubelet[2032]: E1002 19:28:49.433507 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:50.434921 kubelet[2032]: E1002 19:28:50.434865 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:51.435681 kubelet[2032]: E1002 19:28:51.435617 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:52.442207 kubelet[2032]: E1002 19:28:52.442140 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:28:53.443017 kubelet[2032]: E1002 19:28:53.442937 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:54.444700 kubelet[2032]: E1002 19:28:54.444625 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:55.445138 kubelet[2032]: E1002 19:28:55.445044 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:55.769220 env[1571]: time="2023-10-02T19:28:55.768795483Z" level=info msg="CreateContainer within sandbox \"742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:28:55.785564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount219490778.mount: Deactivated successfully. Oct 2 19:28:55.795846 env[1571]: time="2023-10-02T19:28:55.795780424Z" level=info msg="CreateContainer within sandbox \"742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2\"" Oct 2 19:28:55.797301 env[1571]: time="2023-10-02T19:28:55.797254529Z" level=info msg="StartContainer for \"ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2\"" Oct 2 19:28:55.842205 systemd[1]: Started cri-containerd-ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2.scope. Oct 2 19:28:55.883186 systemd[1]: cri-containerd-ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2.scope: Deactivated successfully. Oct 2 19:28:55.900099 env[1571]: time="2023-10-02T19:28:55.900010529Z" level=info msg="shim disconnected" id=ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2 Oct 2 19:28:55.900514 env[1571]: time="2023-10-02T19:28:55.900481297Z" level=warning msg="cleaning up after shim disconnected" id=ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2 namespace=k8s.io Oct 2 19:28:55.900646 env[1571]: time="2023-10-02T19:28:55.900618376Z" level=info msg="cleaning up dead shim" Oct 2 19:28:55.925685 env[1571]: time="2023-10-02T19:28:55.925619707Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:28:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2792 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:28:55Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:28:55.926412 env[1571]: time="2023-10-02T19:28:55.926333647Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:28:55.927251 env[1571]: time="2023-10-02T19:28:55.927200182Z" level=error msg="Failed to pipe stdout of container \"ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2\"" error="reading from a closed fifo" Oct 2 19:28:55.927503 env[1571]: time="2023-10-02T19:28:55.927445022Z" level=error msg="Failed to pipe stderr of container \"ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2\"" error="reading from a closed fifo" Oct 2 19:28:55.932724 env[1571]: time="2023-10-02T19:28:55.932640630Z" level=error msg="StartContainer for \"ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2\" failed" error="failed 
to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:28:55.933320 kubelet[2032]: E1002 19:28:55.933275 2032 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2" Oct 2 19:28:55.933593 kubelet[2032]: E1002 19:28:55.933456 2032 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:28:55.933593 kubelet[2032]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:28:55.933593 kubelet[2032]: rm /hostbin/cilium-mount Oct 2 19:28:55.933593 kubelet[2032]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-vcnfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-rhdkw_kube-system(f2c34d84-deae-4d38-9f07-7825918c3d74): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:28:55.933973 kubelet[2032]: E1002 19:28:55.933524 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rhdkw" podUID=f2c34d84-deae-4d38-9f07-7825918c3d74 Oct 2 19:28:56.037962 kubelet[2032]: I1002 19:28:56.036171 2032 scope.go:115] "RemoveContainer" 
containerID="124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb" Oct 2 19:28:56.037962 kubelet[2032]: I1002 19:28:56.036620 2032 scope.go:115] "RemoveContainer" containerID="124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb" Oct 2 19:28:56.039421 env[1571]: time="2023-10-02T19:28:56.039365084Z" level=info msg="RemoveContainer for \"124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb\"" Oct 2 19:28:56.041947 env[1571]: time="2023-10-02T19:28:56.041876789Z" level=info msg="RemoveContainer for \"124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb\"" Oct 2 19:28:56.042163 env[1571]: time="2023-10-02T19:28:56.042034424Z" level=error msg="RemoveContainer for \"124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb\" failed" error="failed to set removing state for container \"124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb\": container is already in removing state" Oct 2 19:28:56.042590 kubelet[2032]: E1002 19:28:56.042558 2032 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb\": container is already in removing state" containerID="124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb" Oct 2 19:28:56.042726 kubelet[2032]: E1002 19:28:56.042639 2032 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb": container is already in removing state; Skipping pod "cilium-rhdkw_kube-system(f2c34d84-deae-4d38-9f07-7825918c3d74)" Oct 2 19:28:56.043305 kubelet[2032]: E1002 19:28:56.043230 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-rhdkw_kube-system(f2c34d84-deae-4d38-9f07-7825918c3d74)\"" pod="kube-system/cilium-rhdkw" podUID=f2c34d84-deae-4d38-9f07-7825918c3d74 Oct 2 19:28:56.046455 env[1571]: time="2023-10-02T19:28:56.046385583Z" level=info msg="RemoveContainer for \"124e0d3c46f527184bdcc3379bd0757a964e5d37ff748f123a36c9a5371e0ddb\" returns successfully" Oct 2 19:28:56.446179 kubelet[2032]: E1002 19:28:56.446141 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:56.781448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2-rootfs.mount: Deactivated successfully. 
Oct 2 19:28:57.447830 kubelet[2032]: E1002 19:28:57.447763 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:58.449099 kubelet[2032]: E1002 19:28:58.449066 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:59.006034 kubelet[2032]: W1002 19:28:59.005964 2032 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2c34d84_deae_4d38_9f07_7825918c3d74.slice/cri-containerd-ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2.scope WatchSource:0}: task ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2 not found: not found Oct 2 19:28:59.450692 kubelet[2032]: E1002 19:28:59.450638 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:00.451803 kubelet[2032]: E1002 19:29:00.451727 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:01.452808 kubelet[2032]: E1002 19:29:01.452749 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:02.453508 kubelet[2032]: E1002 19:29:02.453475 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:03.456162 kubelet[2032]: E1002 19:29:03.456108 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:04.334085 kubelet[2032]: E1002 19:29:04.334021 2032 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:04.373901 kubelet[2032]: E1002 19:29:04.373848 2032 kubelet_node_status.go:452] "Node not becoming ready in time after startup" Oct 2 19:29:04.456711 kubelet[2032]: E1002 19:29:04.456647 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:04.508685 kubelet[2032]: E1002 19:29:04.508650 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:05.457260 kubelet[2032]: E1002 19:29:05.457194 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:06.458252 kubelet[2032]: E1002 19:29:06.458207 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:07.459776 kubelet[2032]: E1002 19:29:07.459730 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:08.460714 kubelet[2032]: E1002 19:29:08.460668 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:09.461447 kubelet[2032]: E1002 19:29:09.461401 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:09.510458 kubelet[2032]: E1002 19:29:09.510426 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni 
plugin not initialized" Oct 2 19:29:10.462865 kubelet[2032]: E1002 19:29:10.462807 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:11.464010 kubelet[2032]: E1002 19:29:11.463964 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:11.765693 kubelet[2032]: E1002 19:29:11.765559 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-rhdkw_kube-system(f2c34d84-deae-4d38-9f07-7825918c3d74)\"" pod="kube-system/cilium-rhdkw" podUID=f2c34d84-deae-4d38-9f07-7825918c3d74 Oct 2 19:29:12.465554 kubelet[2032]: E1002 19:29:12.465514 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:13.467236 kubelet[2032]: E1002 19:29:13.467164 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:14.467445 kubelet[2032]: E1002 19:29:14.467403 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:14.511630 kubelet[2032]: E1002 19:29:14.511585 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:15.468888 kubelet[2032]: E1002 19:29:15.468841 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:16.470127 kubelet[2032]: E1002 19:29:16.470083 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:17.470896 kubelet[2032]: E1002 19:29:17.470830 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:18.471675 kubelet[2032]: E1002 19:29:18.471629 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:19.472628 kubelet[2032]: E1002 19:29:19.472558 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:19.512893 kubelet[2032]: E1002 19:29:19.512844 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:20.473522 kubelet[2032]: E1002 19:29:20.473451 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:21.474033 kubelet[2032]: E1002 19:29:21.473968 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:22.475020 kubelet[2032]: E1002 19:29:22.474979 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:23.476095 kubelet[2032]: E1002 19:29:23.476019 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:24.333843 kubelet[2032]: E1002 19:29:24.333797 2032 file.go:104] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:24.476927 kubelet[2032]: E1002 19:29:24.476853 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:24.513560 kubelet[2032]: E1002 19:29:24.513505 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:25.477127 kubelet[2032]: E1002 19:29:25.477077 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:26.478636 kubelet[2032]: E1002 19:29:26.478563 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:26.766165 kubelet[2032]: E1002 19:29:26.765753 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-rhdkw_kube-system(f2c34d84-deae-4d38-9f07-7825918c3d74)\"" pod="kube-system/cilium-rhdkw" podUID=f2c34d84-deae-4d38-9f07-7825918c3d74 Oct 2 19:29:27.479834 kubelet[2032]: E1002 19:29:27.479761 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:28.480942 kubelet[2032]: E1002 19:29:28.480867 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:29.482013 kubelet[2032]: E1002 19:29:29.481944 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:29.514926 kubelet[2032]: E1002 19:29:29.514875 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:30.482701 kubelet[2032]: E1002 19:29:30.482655 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:31.483778 kubelet[2032]: E1002 19:29:31.483734 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:32.484920 kubelet[2032]: E1002 19:29:32.484851 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:33.485196 kubelet[2032]: E1002 19:29:33.485126 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:34.485654 kubelet[2032]: E1002 19:29:34.485606 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:34.516365 kubelet[2032]: E1002 19:29:34.516300 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:35.487013 kubelet[2032]: E1002 19:29:35.486941 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:36.487242 kubelet[2032]: E1002 19:29:36.487175 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:37.488206 
kubelet[2032]: E1002 19:29:37.488155 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:38.489354 kubelet[2032]: E1002 19:29:38.489307 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:39.490620 kubelet[2032]: E1002 19:29:39.490569 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:39.517531 kubelet[2032]: E1002 19:29:39.517492 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:40.491855 kubelet[2032]: E1002 19:29:40.491786 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:40.766034 kubelet[2032]: E1002 19:29:40.765648 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-rhdkw_kube-system(f2c34d84-deae-4d38-9f07-7825918c3d74)\"" pod="kube-system/cilium-rhdkw" podUID=f2c34d84-deae-4d38-9f07-7825918c3d74 Oct 2 19:29:41.492981 kubelet[2032]: E1002 19:29:41.492916 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:42.493321 kubelet[2032]: E1002 19:29:42.493250 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:43.494272 kubelet[2032]: E1002 19:29:43.494228 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:44.333703 kubelet[2032]: E1002 19:29:44.333642 2032 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:44.495927 kubelet[2032]: E1002 19:29:44.495876 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:44.518108 kubelet[2032]: E1002 19:29:44.518042 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:45.497009 kubelet[2032]: E1002 19:29:45.496865 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:46.498000 kubelet[2032]: E1002 19:29:46.497961 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:47.499574 kubelet[2032]: E1002 19:29:47.499466 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:48.500457 kubelet[2032]: E1002 19:29:48.500392 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:49.501247 kubelet[2032]: E1002 19:29:49.501203 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:49.519385 kubelet[2032]: E1002 19:29:49.519335 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:50.502204 kubelet[2032]: E1002 19:29:50.502167 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:51.502965 kubelet[2032]: E1002 19:29:51.502895 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:51.765303 kubelet[2032]: E1002 19:29:51.764900 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-rhdkw_kube-system(f2c34d84-deae-4d38-9f07-7825918c3d74)\"" pod="kube-system/cilium-rhdkw" podUID=f2c34d84-deae-4d38-9f07-7825918c3d74 Oct 2 19:29:52.503629 kubelet[2032]: E1002 19:29:52.503590 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:53.504592 kubelet[2032]: E1002 19:29:53.504523 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:54.505632 kubelet[2032]: E1002 19:29:54.505595 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:54.520425 kubelet[2032]: E1002 19:29:54.520253 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:55.506905 kubelet[2032]: E1002 19:29:55.506854 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:56.508669 kubelet[2032]: E1002 19:29:56.508630 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:57.509771 kubelet[2032]: E1002 19:29:57.509734 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:58.511273 kubelet[2032]: E1002 19:29:58.511236 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:59.512397 kubelet[2032]: E1002 19:29:59.512335 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:59.521818 kubelet[2032]: E1002 19:29:59.521788 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:00.513146 kubelet[2032]: E1002 19:30:00.513089 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:01.513321 kubelet[2032]: E1002 19:30:01.513262 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:02.514062 kubelet[2032]: E1002 19:30:02.513998 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:03.514287 kubelet[2032]: E1002 19:30:03.514247 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:04.333765 kubelet[2032]: E1002 
19:30:04.333713 2032 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:04.515176 kubelet[2032]: E1002 19:30:04.515117 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:04.522769 kubelet[2032]: E1002 19:30:04.522732 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:04.766198 kubelet[2032]: E1002 19:30:04.766144 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-rhdkw_kube-system(f2c34d84-deae-4d38-9f07-7825918c3d74)\"" pod="kube-system/cilium-rhdkw" podUID=f2c34d84-deae-4d38-9f07-7825918c3d74 Oct 2 19:30:05.515824 kubelet[2032]: E1002 19:30:05.515786 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:06.516830 kubelet[2032]: E1002 19:30:06.516772 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:07.517249 kubelet[2032]: E1002 19:30:07.517208 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:08.518410 kubelet[2032]: E1002 19:30:08.518373 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:09.520075 kubelet[2032]: E1002 19:30:09.519987 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:09.523830 kubelet[2032]: E1002 19:30:09.523780 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:10.521014 kubelet[2032]: E1002 19:30:10.520974 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:11.522420 kubelet[2032]: E1002 19:30:11.522353 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:12.523183 kubelet[2032]: E1002 19:30:12.523125 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:13.523490 kubelet[2032]: E1002 19:30:13.523429 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:14.523727 kubelet[2032]: E1002 19:30:14.523693 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:14.525231 kubelet[2032]: E1002 19:30:14.525193 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:15.525121 kubelet[2032]: E1002 19:30:15.525073 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:16.525944 kubelet[2032]: E1002 19:30:16.525879 2032 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:16.774345 env[1571]: time="2023-10-02T19:30:16.774275909Z" level=info msg="CreateContainer within sandbox \"742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}" Oct 2 19:30:16.789607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1782638657.mount: Deactivated successfully. Oct 2 19:30:16.800591 env[1571]: time="2023-10-02T19:30:16.800507784Z" level=info msg="CreateContainer within sandbox \"742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id \"d58809c6ea674e11b178eefdf94b7883c8801304dbfbbecab4e993206b931911\"" Oct 2 19:30:16.801439 env[1571]: time="2023-10-02T19:30:16.801372886Z" level=info msg="StartContainer for \"d58809c6ea674e11b178eefdf94b7883c8801304dbfbbecab4e993206b931911\"" Oct 2 19:30:16.856000 systemd[1]: Started cri-containerd-d58809c6ea674e11b178eefdf94b7883c8801304dbfbbecab4e993206b931911.scope. Oct 2 19:30:16.888376 systemd[1]: cri-containerd-d58809c6ea674e11b178eefdf94b7883c8801304dbfbbecab4e993206b931911.scope: Deactivated successfully. Oct 2 19:30:16.910259 env[1571]: time="2023-10-02T19:30:16.910188124Z" level=info msg="shim disconnected" id=d58809c6ea674e11b178eefdf94b7883c8801304dbfbbecab4e993206b931911 Oct 2 19:30:16.910623 env[1571]: time="2023-10-02T19:30:16.910588753Z" level=warning msg="cleaning up after shim disconnected" id=d58809c6ea674e11b178eefdf94b7883c8801304dbfbbecab4e993206b931911 namespace=k8s.io Oct 2 19:30:16.910764 env[1571]: time="2023-10-02T19:30:16.910736433Z" level=info msg="cleaning up dead shim" Oct 2 19:30:16.939232 env[1571]: time="2023-10-02T19:30:16.936749117Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:30:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2838 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:30:16Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d58809c6ea674e11b178eefdf94b7883c8801304dbfbbecab4e993206b931911/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:30:16.939718 env[1571]: time="2023-10-02T19:30:16.939625333Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:30:16.942983 env[1571]: time="2023-10-02T19:30:16.942916615Z" level=error msg="Failed to pipe stdout of container \"d58809c6ea674e11b178eefdf94b7883c8801304dbfbbecab4e993206b931911\"" error="reading from a closed fifo" Oct 2 19:30:16.943272 env[1571]: time="2023-10-02T19:30:16.943192930Z" level=error msg="Failed to pipe stderr of container \"d58809c6ea674e11b178eefdf94b7883c8801304dbfbbecab4e993206b931911\"" error="reading from a closed fifo" Oct 2 19:30:16.945385 env[1571]: time="2023-10-02T19:30:16.945280296Z" level=error msg="StartContainer for \"d58809c6ea674e11b178eefdf94b7883c8801304dbfbbecab4e993206b931911\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:30:16.945781 kubelet[2032]: E1002 19:30:16.945730 2032 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to 
start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d58809c6ea674e11b178eefdf94b7883c8801304dbfbbecab4e993206b931911" Oct 2 19:30:16.946207 kubelet[2032]: E1002 19:30:16.946169 2032 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:30:16.946207 kubelet[2032]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:30:16.946207 kubelet[2032]: rm /hostbin/cilium-mount Oct 2 19:30:16.946207 kubelet[2032]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-vcnfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-rhdkw_kube-system(f2c34d84-deae-4d38-9f07-7825918c3d74): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:30:16.946607 kubelet[2032]: E1002 19:30:16.946573 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rhdkw" podUID=f2c34d84-deae-4d38-9f07-7825918c3d74 Oct 2 19:30:17.200355 kubelet[2032]: I1002 19:30:17.200307 2032 scope.go:115] "RemoveContainer" containerID="ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2" Oct 2 19:30:17.201954 kubelet[2032]: I1002 19:30:17.201890 2032 scope.go:115] "RemoveContainer" containerID="ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2" Oct 2 19:30:17.203414 env[1571]: time="2023-10-02T19:30:17.203367762Z" level=info msg="RemoveContainer for \"ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2\"" Oct 2 19:30:17.203974 env[1571]: time="2023-10-02T19:30:17.203746970Z" level=info msg="RemoveContainer for 
\"ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2\"" Oct 2 19:30:17.204228 env[1571]: time="2023-10-02T19:30:17.204159480Z" level=error msg="RemoveContainer for \"ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2\" failed" error="failed to set removing state for container \"ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2\": container is already in removing state" Oct 2 19:30:17.204694 kubelet[2032]: E1002 19:30:17.204653 2032 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2\": container is already in removing state" containerID="ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2" Oct 2 19:30:17.204872 kubelet[2032]: I1002 19:30:17.204718 2032 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2} err="rpc error: code = Unknown desc = failed to set removing state for container \"ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2\": container is already in removing state" Oct 2 19:30:17.207849 env[1571]: time="2023-10-02T19:30:17.207788628Z" level=info msg="RemoveContainer for \"ca1dfe62bb36e2d4e0594ceef92a51f41420541773d98741f7756c8523009af2\" returns successfully" Oct 2 19:30:17.209399 kubelet[2032]: E1002 19:30:17.209347 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-rhdkw_kube-system(f2c34d84-deae-4d38-9f07-7825918c3d74)\"" pod="kube-system/cilium-rhdkw" podUID=f2c34d84-deae-4d38-9f07-7825918c3d74 Oct 2 19:30:17.526502 kubelet[2032]: E1002 19:30:17.526377 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:17.785044 systemd[1]: run-containerd-runc-k8s.io-d58809c6ea674e11b178eefdf94b7883c8801304dbfbbecab4e993206b931911-runc.i8Jzi3.mount: Deactivated successfully. Oct 2 19:30:17.785244 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d58809c6ea674e11b178eefdf94b7883c8801304dbfbbecab4e993206b931911-rootfs.mount: Deactivated successfully. 
Oct 2 19:30:18.527609 kubelet[2032]: E1002 19:30:18.527548 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:19.526984 kubelet[2032]: E1002 19:30:19.526933 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:19.528501 kubelet[2032]: E1002 19:30:19.528458 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:20.015479 kubelet[2032]: W1002 19:30:20.015422 2032 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2c34d84_deae_4d38_9f07_7825918c3d74.slice/cri-containerd-d58809c6ea674e11b178eefdf94b7883c8801304dbfbbecab4e993206b931911.scope WatchSource:0}: task d58809c6ea674e11b178eefdf94b7883c8801304dbfbbecab4e993206b931911 not found: not found Oct 2 19:30:20.528818 kubelet[2032]: E1002 19:30:20.528756 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:21.529180 kubelet[2032]: E1002 19:30:21.529115 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:22.529706 kubelet[2032]: E1002 19:30:22.529639 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:23.530272 kubelet[2032]: E1002 19:30:23.530203 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:24.333691 kubelet[2032]: E1002 19:30:24.333633 2032 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:24.527969 kubelet[2032]: E1002 19:30:24.527919 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:24.531333 kubelet[2032]: E1002 19:30:24.531291 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:25.531685 kubelet[2032]: E1002 19:30:25.531622 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:26.532170 kubelet[2032]: E1002 19:30:26.532132 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:27.533897 kubelet[2032]: E1002 19:30:27.533837 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:28.534737 kubelet[2032]: E1002 19:30:28.534702 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:29.529278 kubelet[2032]: E1002 19:30:29.529227 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:29.535448 kubelet[2032]: E1002 19:30:29.535394 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:29.765863 kubelet[2032]: E1002 
19:30:29.765810 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-rhdkw_kube-system(f2c34d84-deae-4d38-9f07-7825918c3d74)\"" pod="kube-system/cilium-rhdkw" podUID=f2c34d84-deae-4d38-9f07-7825918c3d74 Oct 2 19:30:30.536298 kubelet[2032]: E1002 19:30:30.536261 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:31.537262 kubelet[2032]: E1002 19:30:31.537200 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:32.538415 kubelet[2032]: E1002 19:30:32.538350 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:33.539251 kubelet[2032]: E1002 19:30:33.539193 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:34.375467 env[1571]: time="2023-10-02T19:30:34.375397848Z" level=info msg="StopPodSandbox for \"742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c\"" Oct 2 19:30:34.376235 env[1571]: time="2023-10-02T19:30:34.376172621Z" level=info msg="Container to stop \"d58809c6ea674e11b178eefdf94b7883c8801304dbfbbecab4e993206b931911\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:30:34.378727 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c-shm.mount: Deactivated successfully. Oct 2 19:30:34.396365 systemd[1]: cri-containerd-742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c.scope: Deactivated successfully. Oct 2 19:30:34.399132 kernel: kauditd_printk_skb: 165 callbacks suppressed Oct 2 19:30:34.399293 kernel: audit: type=1334 audit(1696275034.396:720): prog-id=70 op=UNLOAD Oct 2 19:30:34.396000 audit: BPF prog-id=70 op=UNLOAD Oct 2 19:30:34.403000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:30:34.407239 kernel: audit: type=1334 audit(1696275034.403:721): prog-id=73 op=UNLOAD Oct 2 19:30:34.447030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c-rootfs.mount: Deactivated successfully. 
Oct 2 19:30:34.464381 env[1571]: time="2023-10-02T19:30:34.464299069Z" level=info msg="shim disconnected" id=742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c Oct 2 19:30:34.464670 env[1571]: time="2023-10-02T19:30:34.464375636Z" level=warning msg="cleaning up after shim disconnected" id=742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c namespace=k8s.io Oct 2 19:30:34.464670 env[1571]: time="2023-10-02T19:30:34.464402803Z" level=info msg="cleaning up dead shim" Oct 2 19:30:34.489240 env[1571]: time="2023-10-02T19:30:34.489177270Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:30:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2869 runtime=io.containerd.runc.v2\n" Oct 2 19:30:34.489778 env[1571]: time="2023-10-02T19:30:34.489730307Z" level=info msg="TearDown network for sandbox \"742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c\" successfully" Oct 2 19:30:34.489778 env[1571]: time="2023-10-02T19:30:34.489778040Z" level=info msg="StopPodSandbox for \"742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c\" returns successfully" Oct 2 19:30:34.530662 kubelet[2032]: E1002 19:30:34.530589 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:34.540218 kubelet[2032]: E1002 19:30:34.540172 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:34.636993 kubelet[2032]: I1002 19:30:34.636863 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-bpf-maps\") pod \"f2c34d84-deae-4d38-9f07-7825918c3d74\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " Oct 2 19:30:34.636993 kubelet[2032]: I1002 19:30:34.636951 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-cilium-cgroup\") pod \"f2c34d84-deae-4d38-9f07-7825918c3d74\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " Oct 2 19:30:34.637392 kubelet[2032]: I1002 19:30:34.637286 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f2c34d84-deae-4d38-9f07-7825918c3d74" (UID: "f2c34d84-deae-4d38-9f07-7825918c3d74"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:34.637392 kubelet[2032]: I1002 19:30:34.637359 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f2c34d84-deae-4d38-9f07-7825918c3d74" (UID: "f2c34d84-deae-4d38-9f07-7825918c3d74"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:34.637629 kubelet[2032]: I1002 19:30:34.637589 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f2c34d84-deae-4d38-9f07-7825918c3d74" (UID: "f2c34d84-deae-4d38-9f07-7825918c3d74"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:34.641084 kubelet[2032]: I1002 19:30:34.637752 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-cilium-run\") pod \"f2c34d84-deae-4d38-9f07-7825918c3d74\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " Oct 2 19:30:34.641084 kubelet[2032]: I1002 19:30:34.637823 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-hostproc\") pod \"f2c34d84-deae-4d38-9f07-7825918c3d74\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " Oct 2 19:30:34.641084 kubelet[2032]: I1002 19:30:34.637894 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-hostproc" (OuterVolumeSpecName: "hostproc") pod "f2c34d84-deae-4d38-9f07-7825918c3d74" (UID: "f2c34d84-deae-4d38-9f07-7825918c3d74"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:34.641084 kubelet[2032]: I1002 19:30:34.638556 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2c34d84-deae-4d38-9f07-7825918c3d74-clustermesh-secrets\") pod \"f2c34d84-deae-4d38-9f07-7825918c3d74\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " Oct 2 19:30:34.641084 kubelet[2032]: I1002 19:30:34.638618 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2c34d84-deae-4d38-9f07-7825918c3d74-cilium-config-path\") pod \"f2c34d84-deae-4d38-9f07-7825918c3d74\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " Oct 2 19:30:34.641084 kubelet[2032]: I1002 19:30:34.638670 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2c34d84-deae-4d38-9f07-7825918c3d74-hubble-tls\") pod \"f2c34d84-deae-4d38-9f07-7825918c3d74\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " Oct 2 19:30:34.641571 kubelet[2032]: I1002 19:30:34.638766 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-host-proc-sys-net\") pod \"f2c34d84-deae-4d38-9f07-7825918c3d74\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " Oct 2 19:30:34.641571 kubelet[2032]: I1002 19:30:34.638811 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-lib-modules\") pod \"f2c34d84-deae-4d38-9f07-7825918c3d74\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " Oct 2 19:30:34.641571 kubelet[2032]: I1002 19:30:34.638851 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-etc-cni-netd\") pod \"f2c34d84-deae-4d38-9f07-7825918c3d74\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " Oct 2 19:30:34.641571 kubelet[2032]: I1002 19:30:34.638893 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-host-proc-sys-kernel\") pod 
\"f2c34d84-deae-4d38-9f07-7825918c3d74\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " Oct 2 19:30:34.641571 kubelet[2032]: I1002 19:30:34.638938 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcnfr\" (UniqueName: \"kubernetes.io/projected/f2c34d84-deae-4d38-9f07-7825918c3d74-kube-api-access-vcnfr\") pod \"f2c34d84-deae-4d38-9f07-7825918c3d74\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " Oct 2 19:30:34.641571 kubelet[2032]: I1002 19:30:34.638976 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-cni-path\") pod \"f2c34d84-deae-4d38-9f07-7825918c3d74\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " Oct 2 19:30:34.641925 kubelet[2032]: I1002 19:30:34.639015 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-xtables-lock\") pod \"f2c34d84-deae-4d38-9f07-7825918c3d74\" (UID: \"f2c34d84-deae-4d38-9f07-7825918c3d74\") " Oct 2 19:30:34.641925 kubelet[2032]: I1002 19:30:34.639104 2032 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-bpf-maps\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:30:34.641925 kubelet[2032]: I1002 19:30:34.639133 2032 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-cilium-cgroup\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:30:34.641925 kubelet[2032]: I1002 19:30:34.639158 2032 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-cilium-run\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:30:34.641925 kubelet[2032]: I1002 19:30:34.639202 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f2c34d84-deae-4d38-9f07-7825918c3d74" (UID: "f2c34d84-deae-4d38-9f07-7825918c3d74"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:34.641925 kubelet[2032]: W1002 19:30:34.639449 2032 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/f2c34d84-deae-4d38-9f07-7825918c3d74/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:30:34.642433 kubelet[2032]: I1002 19:30:34.642392 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f2c34d84-deae-4d38-9f07-7825918c3d74" (UID: "f2c34d84-deae-4d38-9f07-7825918c3d74"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:34.642603 kubelet[2032]: I1002 19:30:34.642576 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f2c34d84-deae-4d38-9f07-7825918c3d74" (UID: "f2c34d84-deae-4d38-9f07-7825918c3d74"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:34.642758 kubelet[2032]: I1002 19:30:34.642730 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f2c34d84-deae-4d38-9f07-7825918c3d74" (UID: "f2c34d84-deae-4d38-9f07-7825918c3d74"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:34.643338 kubelet[2032]: I1002 19:30:34.643295 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f2c34d84-deae-4d38-9f07-7825918c3d74" (UID: "f2c34d84-deae-4d38-9f07-7825918c3d74"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:34.643544 kubelet[2032]: I1002 19:30:34.643514 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-cni-path" (OuterVolumeSpecName: "cni-path") pod "f2c34d84-deae-4d38-9f07-7825918c3d74" (UID: "f2c34d84-deae-4d38-9f07-7825918c3d74"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:34.647392 kubelet[2032]: I1002 19:30:34.647344 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2c34d84-deae-4d38-9f07-7825918c3d74-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f2c34d84-deae-4d38-9f07-7825918c3d74" (UID: "f2c34d84-deae-4d38-9f07-7825918c3d74"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:30:34.655940 systemd[1]: var-lib-kubelet-pods-f2c34d84\x2ddeae\x2d4d38\x2d9f07\x2d7825918c3d74-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:30:34.660540 systemd[1]: var-lib-kubelet-pods-f2c34d84\x2ddeae\x2d4d38\x2d9f07\x2d7825918c3d74-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvcnfr.mount: Deactivated successfully. Oct 2 19:30:34.662332 kubelet[2032]: I1002 19:30:34.662287 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2c34d84-deae-4d38-9f07-7825918c3d74-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f2c34d84-deae-4d38-9f07-7825918c3d74" (UID: "f2c34d84-deae-4d38-9f07-7825918c3d74"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:30:34.665374 systemd[1]: var-lib-kubelet-pods-f2c34d84\x2ddeae\x2d4d38\x2d9f07\x2d7825918c3d74-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:30:34.667278 kubelet[2032]: I1002 19:30:34.667230 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2c34d84-deae-4d38-9f07-7825918c3d74-kube-api-access-vcnfr" (OuterVolumeSpecName: "kube-api-access-vcnfr") pod "f2c34d84-deae-4d38-9f07-7825918c3d74" (UID: "f2c34d84-deae-4d38-9f07-7825918c3d74"). InnerVolumeSpecName "kube-api-access-vcnfr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:30:34.668089 kubelet[2032]: I1002 19:30:34.667994 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2c34d84-deae-4d38-9f07-7825918c3d74-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f2c34d84-deae-4d38-9f07-7825918c3d74" (UID: "f2c34d84-deae-4d38-9f07-7825918c3d74"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:30:34.740224 kubelet[2032]: I1002 19:30:34.740187 2032 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-hostproc\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:30:34.740428 kubelet[2032]: I1002 19:30:34.740406 2032 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2c34d84-deae-4d38-9f07-7825918c3d74-clustermesh-secrets\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:30:34.740564 kubelet[2032]: I1002 19:30:34.740545 2032 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2c34d84-deae-4d38-9f07-7825918c3d74-cilium-config-path\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:30:34.740691 kubelet[2032]: I1002 19:30:34.740671 2032 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-host-proc-sys-net\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:30:34.740888 kubelet[2032]: I1002 19:30:34.740867 2032 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-lib-modules\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:30:34.741016 kubelet[2032]: I1002 19:30:34.740996 2032 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2c34d84-deae-4d38-9f07-7825918c3d74-hubble-tls\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:30:34.741189 kubelet[2032]: I1002 19:30:34.741169 2032 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-vcnfr\" (UniqueName: \"kubernetes.io/projected/f2c34d84-deae-4d38-9f07-7825918c3d74-kube-api-access-vcnfr\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:30:34.741317 kubelet[2032]: I1002 19:30:34.741297 2032 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-cni-path\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:30:34.741445 kubelet[2032]: I1002 19:30:34.741426 2032 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-xtables-lock\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:30:34.741571 kubelet[2032]: I1002 19:30:34.741551 2032 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-etc-cni-netd\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:30:34.741703 kubelet[2032]: I1002 19:30:34.741684 2032 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2c34d84-deae-4d38-9f07-7825918c3d74-host-proc-sys-kernel\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:30:34.775836 systemd[1]: Removed slice 
kubepods-burstable-podf2c34d84_deae_4d38_9f07_7825918c3d74.slice. Oct 2 19:30:35.238432 kubelet[2032]: I1002 19:30:35.238398 2032 scope.go:115] "RemoveContainer" containerID="d58809c6ea674e11b178eefdf94b7883c8801304dbfbbecab4e993206b931911" Oct 2 19:30:35.243120 env[1571]: time="2023-10-02T19:30:35.242993972Z" level=info msg="RemoveContainer for \"d58809c6ea674e11b178eefdf94b7883c8801304dbfbbecab4e993206b931911\"" Oct 2 19:30:35.247680 env[1571]: time="2023-10-02T19:30:35.247620921Z" level=info msg="RemoveContainer for \"d58809c6ea674e11b178eefdf94b7883c8801304dbfbbecab4e993206b931911\" returns successfully" Oct 2 19:30:35.540679 kubelet[2032]: E1002 19:30:35.540542 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:36.541655 kubelet[2032]: E1002 19:30:36.541622 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:36.769659 kubelet[2032]: I1002 19:30:36.769625 2032 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=f2c34d84-deae-4d38-9f07-7825918c3d74 path="/var/lib/kubelet/pods/f2c34d84-deae-4d38-9f07-7825918c3d74/volumes" Oct 2 19:30:37.542391 kubelet[2032]: E1002 19:30:37.542352 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:38.120661 kubelet[2032]: I1002 19:30:38.120625 2032 topology_manager.go:210] "Topology Admit Handler" Oct 2 19:30:38.120898 kubelet[2032]: E1002 19:30:38.120876 2032 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2c34d84-deae-4d38-9f07-7825918c3d74" containerName="mount-cgroup" Oct 2 19:30:38.121114 kubelet[2032]: E1002 19:30:38.121039 2032 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2c34d84-deae-4d38-9f07-7825918c3d74" containerName="mount-cgroup" Oct 2 19:30:38.121277 kubelet[2032]: E1002 19:30:38.121257 2032 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2c34d84-deae-4d38-9f07-7825918c3d74" containerName="mount-cgroup" Oct 2 19:30:38.121418 kubelet[2032]: E1002 19:30:38.121398 2032 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2c34d84-deae-4d38-9f07-7825918c3d74" containerName="mount-cgroup" Oct 2 19:30:38.121556 kubelet[2032]: E1002 19:30:38.121537 2032 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2c34d84-deae-4d38-9f07-7825918c3d74" containerName="mount-cgroup" Oct 2 19:30:38.121700 kubelet[2032]: E1002 19:30:38.121681 2032 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2c34d84-deae-4d38-9f07-7825918c3d74" containerName="mount-cgroup" Oct 2 19:30:38.121843 kubelet[2032]: I1002 19:30:38.121824 2032 memory_manager.go:346] "RemoveStaleState removing state" podUID="f2c34d84-deae-4d38-9f07-7825918c3d74" containerName="mount-cgroup" Oct 2 19:30:38.121977 kubelet[2032]: I1002 19:30:38.121958 2032 memory_manager.go:346] "RemoveStaleState removing state" podUID="f2c34d84-deae-4d38-9f07-7825918c3d74" containerName="mount-cgroup" Oct 2 19:30:38.122147 kubelet[2032]: I1002 19:30:38.122127 2032 memory_manager.go:346] "RemoveStaleState removing state" podUID="f2c34d84-deae-4d38-9f07-7825918c3d74" containerName="mount-cgroup" Oct 2 19:30:38.122273 kubelet[2032]: I1002 19:30:38.122254 2032 memory_manager.go:346] "RemoveStaleState removing state" podUID="f2c34d84-deae-4d38-9f07-7825918c3d74" containerName="mount-cgroup" Oct 2 19:30:38.131240 systemd[1]: Created slice 
kubepods-besteffort-pod8a29ac4f_ea0a_4529_b8b6_5994708c1610.slice. Oct 2 19:30:38.141187 kubelet[2032]: I1002 19:30:38.141140 2032 topology_manager.go:210] "Topology Admit Handler" Oct 2 19:30:38.141385 kubelet[2032]: I1002 19:30:38.141234 2032 memory_manager.go:346] "RemoveStaleState removing state" podUID="f2c34d84-deae-4d38-9f07-7825918c3d74" containerName="mount-cgroup" Oct 2 19:30:38.141385 kubelet[2032]: I1002 19:30:38.141254 2032 memory_manager.go:346] "RemoveStaleState removing state" podUID="f2c34d84-deae-4d38-9f07-7825918c3d74" containerName="mount-cgroup" Oct 2 19:30:38.150990 systemd[1]: Created slice kubepods-burstable-pod937fe63d_c848_437e_92a0_ef4d4b86f794.slice. Oct 2 19:30:38.262257 kubelet[2032]: I1002 19:30:38.262199 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-cilium-cgroup\") pod \"cilium-zslsl\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " pod="kube-system/cilium-zslsl" Oct 2 19:30:38.262614 kubelet[2032]: I1002 19:30:38.262592 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-host-proc-sys-net\") pod \"cilium-zslsl\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " pod="kube-system/cilium-zslsl" Oct 2 19:30:38.262856 kubelet[2032]: I1002 19:30:38.262836 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lgcn\" (UniqueName: \"kubernetes.io/projected/8a29ac4f-ea0a-4529-b8b6-5994708c1610-kube-api-access-2lgcn\") pod \"cilium-operator-f59cbd8c6-zr99c\" (UID: \"8a29ac4f-ea0a-4529-b8b6-5994708c1610\") " pod="kube-system/cilium-operator-f59cbd8c6-zr99c" Oct 2 19:30:38.263125 kubelet[2032]: I1002 19:30:38.263091 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-cilium-run\") pod \"cilium-zslsl\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " pod="kube-system/cilium-zslsl" Oct 2 19:30:38.263366 kubelet[2032]: I1002 19:30:38.263346 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-hostproc\") pod \"cilium-zslsl\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " pod="kube-system/cilium-zslsl" Oct 2 19:30:38.263590 kubelet[2032]: I1002 19:30:38.263570 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9cbv\" (UniqueName: \"kubernetes.io/projected/937fe63d-c848-437e-92a0-ef4d4b86f794-kube-api-access-g9cbv\") pod \"cilium-zslsl\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " pod="kube-system/cilium-zslsl" Oct 2 19:30:38.263823 kubelet[2032]: I1002 19:30:38.263803 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a29ac4f-ea0a-4529-b8b6-5994708c1610-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-zr99c\" (UID: \"8a29ac4f-ea0a-4529-b8b6-5994708c1610\") " pod="kube-system/cilium-operator-f59cbd8c6-zr99c" Oct 2 19:30:38.264114 kubelet[2032]: I1002 19:30:38.264035 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-xtables-lock\") pod \"cilium-zslsl\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " pod="kube-system/cilium-zslsl" Oct 2 19:30:38.264342 kubelet[2032]: I1002 19:30:38.264297 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-host-proc-sys-kernel\") pod \"cilium-zslsl\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " pod="kube-system/cilium-zslsl" Oct 2 19:30:38.264576 kubelet[2032]: I1002 19:30:38.264555 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-bpf-maps\") pod \"cilium-zslsl\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " pod="kube-system/cilium-zslsl" Oct 2 19:30:38.264803 kubelet[2032]: I1002 19:30:38.264784 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-lib-modules\") pod \"cilium-zslsl\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " pod="kube-system/cilium-zslsl" Oct 2 19:30:38.265023 kubelet[2032]: I1002 19:30:38.265003 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/937fe63d-c848-437e-92a0-ef4d4b86f794-clustermesh-secrets\") pod \"cilium-zslsl\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " pod="kube-system/cilium-zslsl" Oct 2 19:30:38.265265 kubelet[2032]: I1002 19:30:38.265246 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/937fe63d-c848-437e-92a0-ef4d4b86f794-hubble-tls\") pod \"cilium-zslsl\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " pod="kube-system/cilium-zslsl" Oct 2 19:30:38.265557 kubelet[2032]: I1002 19:30:38.265537 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-cni-path\") pod \"cilium-zslsl\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " pod="kube-system/cilium-zslsl" Oct 2 19:30:38.265788 kubelet[2032]: I1002 19:30:38.265765 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-etc-cni-netd\") pod \"cilium-zslsl\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " pod="kube-system/cilium-zslsl" Oct 2 19:30:38.266028 kubelet[2032]: I1002 19:30:38.266008 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/937fe63d-c848-437e-92a0-ef4d4b86f794-cilium-config-path\") pod \"cilium-zslsl\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " pod="kube-system/cilium-zslsl" Oct 2 19:30:38.266238 kubelet[2032]: I1002 19:30:38.266219 2032 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/937fe63d-c848-437e-92a0-ef4d4b86f794-cilium-ipsec-secrets\") pod \"cilium-zslsl\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " 
pod="kube-system/cilium-zslsl" Oct 2 19:30:38.439503 env[1571]: time="2023-10-02T19:30:38.438690670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-zr99c,Uid:8a29ac4f-ea0a-4529-b8b6-5994708c1610,Namespace:kube-system,Attempt:0,}" Oct 2 19:30:38.466091 env[1571]: time="2023-10-02T19:30:38.465986542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zslsl,Uid:937fe63d-c848-437e-92a0-ef4d4b86f794,Namespace:kube-system,Attempt:0,}" Oct 2 19:30:38.481526 env[1571]: time="2023-10-02T19:30:38.481393320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:30:38.481698 env[1571]: time="2023-10-02T19:30:38.481522062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:30:38.481698 env[1571]: time="2023-10-02T19:30:38.481585310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:30:38.482118 env[1571]: time="2023-10-02T19:30:38.482003934Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/594106e78aa82bd894aabaee72ec54248f1c14ebafd8dd85e6a91e369a690faa pid=2900 runtime=io.containerd.runc.v2 Oct 2 19:30:38.511886 env[1571]: time="2023-10-02T19:30:38.511764843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:30:38.512202 env[1571]: time="2023-10-02T19:30:38.512148284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:30:38.512416 env[1571]: time="2023-10-02T19:30:38.512368653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:30:38.512930 env[1571]: time="2023-10-02T19:30:38.512841165Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27 pid=2920 runtime=io.containerd.runc.v2 Oct 2 19:30:38.517327 systemd[1]: Started cri-containerd-594106e78aa82bd894aabaee72ec54248f1c14ebafd8dd85e6a91e369a690faa.scope. Oct 2 19:30:38.543515 kubelet[2032]: E1002 19:30:38.543424 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:38.561539 systemd[1]: Started cri-containerd-890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27.scope. 
Oct 2 19:30:38.568000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.568000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.587126 kernel: audit: type=1400 audit(1696275038.568:722): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.587277 kernel: audit: type=1400 audit(1696275038.568:723): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.587327 kernel: audit: type=1400 audit(1696275038.568:724): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.568000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.568000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.602304 kernel: audit: type=1400 audit(1696275038.568:725): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.568000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.612046 kernel: audit: type=1400 audit(1696275038.568:726): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.568000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.620654 kernel: audit: type=1400 audit(1696275038.568:727): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.568000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.568000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.629142 kernel: audit: type=1400 audit(1696275038.568:728): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.568000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:30:38.637183 kernel: audit: type=1400 audit(1696275038.568:729): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.577000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.577000 audit: BPF prog-id=81 op=LOAD Oct 2 19:30:38.584000 audit[2911]: AVC avc: denied { bpf } for pid=2911 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.584000 audit[2911]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=2900 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:38.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539343130366537386161383262643839346161626165653732656335 Oct 2 19:30:38.584000 audit[2911]: AVC avc: denied { perfmon } for pid=2911 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.584000 audit[2911]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2900 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:38.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539343130366537386161383262643839346161626165653732656335 Oct 2 19:30:38.584000 audit[2911]: AVC avc: denied { bpf } for pid=2911 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.584000 audit[2911]: AVC avc: denied { bpf } for pid=2911 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.584000 audit[2911]: AVC avc: denied { bpf } for pid=2911 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.584000 audit[2911]: AVC avc: denied { perfmon } for pid=2911 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.584000 audit[2911]: AVC avc: denied { perfmon } for pid=2911 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.584000 audit[2911]: AVC avc: denied { perfmon } for pid=2911 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.584000 audit[2911]: AVC avc: 
denied { perfmon } for pid=2911 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.584000 audit[2911]: AVC avc: denied { perfmon } for pid=2911 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.584000 audit[2911]: AVC avc: denied { bpf } for pid=2911 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.584000 audit[2911]: AVC avc: denied { bpf } for pid=2911 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.584000 audit: BPF prog-id=82 op=LOAD Oct 2 19:30:38.584000 audit[2911]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2900 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:38.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539343130366537386161383262643839346161626165653732656335 Oct 2 19:30:38.584000 audit[2911]: AVC avc: denied { bpf } for pid=2911 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.584000 audit[2911]: AVC avc: denied { bpf } for pid=2911 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.584000 audit[2911]: AVC avc: denied { perfmon } for pid=2911 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.584000 audit[2911]: AVC avc: denied { perfmon } for pid=2911 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.584000 audit[2911]: AVC avc: denied { perfmon } for pid=2911 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.584000 audit[2911]: AVC avc: denied { perfmon } for pid=2911 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.584000 audit[2911]: AVC avc: denied { perfmon } for pid=2911 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.584000 audit[2911]: AVC avc: denied { bpf } for pid=2911 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.584000 audit[2911]: AVC avc: denied { bpf } for pid=2911 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.584000 audit: BPF prog-id=83 op=LOAD Oct 2 19:30:38.584000 audit[2911]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 
a3=0 items=0 ppid=2900 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:38.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539343130366537386161383262643839346161626165653732656335 Oct 2 19:30:38.585000 audit: BPF prog-id=83 op=UNLOAD Oct 2 19:30:38.585000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:30:38.585000 audit[2911]: AVC avc: denied { bpf } for pid=2911 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.585000 audit[2911]: AVC avc: denied { bpf } for pid=2911 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.585000 audit[2911]: AVC avc: denied { bpf } for pid=2911 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.585000 audit[2911]: AVC avc: denied { perfmon } for pid=2911 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.585000 audit[2911]: AVC avc: denied { perfmon } for pid=2911 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.585000 audit[2911]: AVC avc: denied { perfmon } for pid=2911 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.585000 audit[2911]: AVC avc: denied { perfmon } for pid=2911 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.585000 audit[2911]: AVC avc: denied { perfmon } for pid=2911 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.585000 audit[2911]: AVC avc: denied { bpf } for pid=2911 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.585000 audit[2911]: AVC avc: denied { bpf } for pid=2911 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.585000 audit: BPF prog-id=84 op=LOAD Oct 2 19:30:38.585000 audit[2911]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2900 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:38.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539343130366537386161383262643839346161626165653732656335 Oct 2 19:30:38.649000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.649000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.649000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.649000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.649000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.649000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.649000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.649000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.649000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.649000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.649000 audit: BPF prog-id=85 op=LOAD Oct 2 19:30:38.651000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.651000 audit[2935]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000145b38 a2=10 a3=0 items=0 ppid=2920 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:38.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839306530336439336165616132623932623835613339363866353965 Oct 2 19:30:38.652000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.652000 audit[2935]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=2920 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:38.652000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839306530336439336165616132623932623835613339363866353965 Oct 2 19:30:38.654000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.654000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.654000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.654000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.654000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.654000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.654000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.654000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.654000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.654000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.654000 audit: BPF prog-id=86 op=LOAD Oct 2 19:30:38.654000 audit[2935]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001458e0 a2=78 a3=0 items=0 ppid=2920 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:38.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839306530336439336165616132623932623835613339363866353965 Oct 2 19:30:38.655000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.655000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.655000 audit[2935]: AVC avc: denied { perfmon } for 
pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.655000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.655000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.655000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.655000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.655000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.655000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.655000 audit: BPF prog-id=87 op=LOAD Oct 2 19:30:38.655000 audit[2935]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000145670 a2=78 a3=0 items=0 ppid=2920 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:38.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839306530336439336165616132623932623835613339363866353965 Oct 2 19:30:38.657000 audit: BPF prog-id=87 op=UNLOAD Oct 2 19:30:38.657000 audit: BPF prog-id=86 op=UNLOAD Oct 2 19:30:38.657000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.657000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.657000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.657000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.657000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.657000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.657000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 
comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.657000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.657000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.657000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.657000 audit: BPF prog-id=88 op=LOAD Oct 2 19:30:38.657000 audit[2935]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000145b40 a2=78 a3=0 items=0 ppid=2920 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:38.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839306530336439336165616132623932623835613339363866353965 Oct 2 19:30:38.679339 env[1571]: time="2023-10-02T19:30:38.679254692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-zr99c,Uid:8a29ac4f-ea0a-4529-b8b6-5994708c1610,Namespace:kube-system,Attempt:0,} returns sandbox id \"594106e78aa82bd894aabaee72ec54248f1c14ebafd8dd85e6a91e369a690faa\"" Oct 2 19:30:38.681962 env[1571]: time="2023-10-02T19:30:38.681825072Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 2 19:30:38.707554 env[1571]: time="2023-10-02T19:30:38.703589194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zslsl,Uid:937fe63d-c848-437e-92a0-ef4d4b86f794,Namespace:kube-system,Attempt:0,} returns sandbox id \"890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27\"" Oct 2 19:30:38.712633 env[1571]: time="2023-10-02T19:30:38.712580708Z" level=info msg="CreateContainer within sandbox \"890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:30:38.738500 env[1571]: time="2023-10-02T19:30:38.738435535Z" level=info msg="CreateContainer within sandbox \"890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a16f776e0b5658511ebe2b0d8e5e22f44d4ba3573b8fdda5acd8e43a2f970220\"" Oct 2 19:30:38.739784 env[1571]: time="2023-10-02T19:30:38.739723143Z" level=info msg="StartContainer for \"a16f776e0b5658511ebe2b0d8e5e22f44d4ba3573b8fdda5acd8e43a2f970220\"" Oct 2 19:30:38.786233 systemd[1]: Started cri-containerd-a16f776e0b5658511ebe2b0d8e5e22f44d4ba3573b8fdda5acd8e43a2f970220.scope. Oct 2 19:30:38.822310 systemd[1]: cri-containerd-a16f776e0b5658511ebe2b0d8e5e22f44d4ba3573b8fdda5acd8e43a2f970220.scope: Deactivated successfully. 
Oct 2 19:30:38.854353 env[1571]: time="2023-10-02T19:30:38.854262623Z" level=info msg="shim disconnected" id=a16f776e0b5658511ebe2b0d8e5e22f44d4ba3573b8fdda5acd8e43a2f970220 Oct 2 19:30:38.854695 env[1571]: time="2023-10-02T19:30:38.854663187Z" level=warning msg="cleaning up after shim disconnected" id=a16f776e0b5658511ebe2b0d8e5e22f44d4ba3573b8fdda5acd8e43a2f970220 namespace=k8s.io Oct 2 19:30:38.854834 env[1571]: time="2023-10-02T19:30:38.854807072Z" level=info msg="cleaning up dead shim" Oct 2 19:30:38.881015 env[1571]: time="2023-10-02T19:30:38.880949813Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:30:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3000 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:30:38Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a16f776e0b5658511ebe2b0d8e5e22f44d4ba3573b8fdda5acd8e43a2f970220/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:30:38.881763 env[1571]: time="2023-10-02T19:30:38.881682784Z" level=error msg="copy shim log" error="read /proc/self/fd/37: file already closed" Oct 2 19:30:38.882177 env[1571]: time="2023-10-02T19:30:38.882110515Z" level=error msg="Failed to pipe stdout of container \"a16f776e0b5658511ebe2b0d8e5e22f44d4ba3573b8fdda5acd8e43a2f970220\"" error="reading from a closed fifo" Oct 2 19:30:38.882325 env[1571]: time="2023-10-02T19:30:38.882135162Z" level=error msg="Failed to pipe stderr of container \"a16f776e0b5658511ebe2b0d8e5e22f44d4ba3573b8fdda5acd8e43a2f970220\"" error="reading from a closed fifo" Oct 2 19:30:38.887319 env[1571]: time="2023-10-02T19:30:38.887251372Z" level=error msg="StartContainer for \"a16f776e0b5658511ebe2b0d8e5e22f44d4ba3573b8fdda5acd8e43a2f970220\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:30:38.887622 kubelet[2032]: E1002 19:30:38.887587 2032 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a16f776e0b5658511ebe2b0d8e5e22f44d4ba3573b8fdda5acd8e43a2f970220" Oct 2 19:30:38.887827 kubelet[2032]: E1002 19:30:38.887728 2032 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:30:38.887827 kubelet[2032]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:30:38.887827 kubelet[2032]: rm /hostbin/cilium-mount Oct 2 19:30:38.887827 kubelet[2032]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-g9cbv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-zslsl_kube-system(937fe63d-c848-437e-92a0-ef4d4b86f794): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:30:38.888279 kubelet[2032]: E1002 19:30:38.887794 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zslsl" podUID=937fe63d-c848-437e-92a0-ef4d4b86f794 Oct 2 19:30:39.256237 env[1571]: time="2023-10-02T19:30:39.256169534Z" level=info msg="CreateContainer within sandbox \"890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:30:39.279006 env[1571]: time="2023-10-02T19:30:39.278944843Z" level=info msg="CreateContainer within sandbox \"890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"9a2c3661db1c5b91b8a346b38735ba793a3a02ed27cb03b71815a035f4a1d45b\"" Oct 2 19:30:39.280577 env[1571]: time="2023-10-02T19:30:39.280528227Z" level=info msg="StartContainer for \"9a2c3661db1c5b91b8a346b38735ba793a3a02ed27cb03b71815a035f4a1d45b\"" Oct 2 19:30:39.329242 systemd[1]: Started cri-containerd-9a2c3661db1c5b91b8a346b38735ba793a3a02ed27cb03b71815a035f4a1d45b.scope. Oct 2 19:30:39.357580 systemd[1]: cri-containerd-9a2c3661db1c5b91b8a346b38735ba793a3a02ed27cb03b71815a035f4a1d45b.scope: Deactivated successfully. 
Oct 2 19:30:39.374867 env[1571]: time="2023-10-02T19:30:39.374791647Z" level=info msg="shim disconnected" id=9a2c3661db1c5b91b8a346b38735ba793a3a02ed27cb03b71815a035f4a1d45b Oct 2 19:30:39.375171 env[1571]: time="2023-10-02T19:30:39.374867496Z" level=warning msg="cleaning up after shim disconnected" id=9a2c3661db1c5b91b8a346b38735ba793a3a02ed27cb03b71815a035f4a1d45b namespace=k8s.io Oct 2 19:30:39.375171 env[1571]: time="2023-10-02T19:30:39.374892526Z" level=info msg="cleaning up dead shim" Oct 2 19:30:39.422081 env[1571]: time="2023-10-02T19:30:39.421975447Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:30:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3036 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:30:39Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9a2c3661db1c5b91b8a346b38735ba793a3a02ed27cb03b71815a035f4a1d45b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:30:39.422551 env[1571]: time="2023-10-02T19:30:39.422455952Z" level=error msg="copy shim log" error="read /proc/self/fd/39: file already closed" Oct 2 19:30:39.422934 env[1571]: time="2023-10-02T19:30:39.422878920Z" level=error msg="Failed to pipe stdout of container \"9a2c3661db1c5b91b8a346b38735ba793a3a02ed27cb03b71815a035f4a1d45b\"" error="reading from a closed fifo" Oct 2 19:30:39.426684 env[1571]: time="2023-10-02T19:30:39.426624179Z" level=error msg="Failed to pipe stderr of container \"9a2c3661db1c5b91b8a346b38735ba793a3a02ed27cb03b71815a035f4a1d45b\"" error="reading from a closed fifo" Oct 2 19:30:39.430726 env[1571]: time="2023-10-02T19:30:39.430664564Z" level=error msg="StartContainer for \"9a2c3661db1c5b91b8a346b38735ba793a3a02ed27cb03b71815a035f4a1d45b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:30:39.431342 kubelet[2032]: E1002 19:30:39.431299 2032 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9a2c3661db1c5b91b8a346b38735ba793a3a02ed27cb03b71815a035f4a1d45b" Oct 2 19:30:39.431542 kubelet[2032]: E1002 19:30:39.431451 2032 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:30:39.431542 kubelet[2032]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:30:39.431542 kubelet[2032]: rm /hostbin/cilium-mount Oct 2 19:30:39.431542 kubelet[2032]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-g9cbv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-zslsl_kube-system(937fe63d-c848-437e-92a0-ef4d4b86f794): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:30:39.431825 kubelet[2032]: E1002 19:30:39.431511 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zslsl" podUID=937fe63d-c848-437e-92a0-ef4d4b86f794 Oct 2 19:30:39.533484 kubelet[2032]: E1002 19:30:39.531913 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:39.544076 kubelet[2032]: E1002 19:30:39.543942 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:39.908277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount348272989.mount: Deactivated successfully. 
Oct 2 19:30:40.257817 kubelet[2032]: I1002 19:30:40.257214 2032 scope.go:115] "RemoveContainer" containerID="a16f776e0b5658511ebe2b0d8e5e22f44d4ba3573b8fdda5acd8e43a2f970220" Oct 2 19:30:40.257817 kubelet[2032]: I1002 19:30:40.257599 2032 scope.go:115] "RemoveContainer" containerID="a16f776e0b5658511ebe2b0d8e5e22f44d4ba3573b8fdda5acd8e43a2f970220" Oct 2 19:30:40.259621 env[1571]: time="2023-10-02T19:30:40.259558703Z" level=info msg="RemoveContainer for \"a16f776e0b5658511ebe2b0d8e5e22f44d4ba3573b8fdda5acd8e43a2f970220\"" Oct 2 19:30:40.263988 env[1571]: time="2023-10-02T19:30:40.263915747Z" level=info msg="RemoveContainer for \"a16f776e0b5658511ebe2b0d8e5e22f44d4ba3573b8fdda5acd8e43a2f970220\" returns successfully" Oct 2 19:30:40.264715 env[1571]: time="2023-10-02T19:30:40.264659797Z" level=info msg="RemoveContainer for \"a16f776e0b5658511ebe2b0d8e5e22f44d4ba3573b8fdda5acd8e43a2f970220\"" Oct 2 19:30:40.264849 env[1571]: time="2023-10-02T19:30:40.264716206Z" level=info msg="RemoveContainer for \"a16f776e0b5658511ebe2b0d8e5e22f44d4ba3573b8fdda5acd8e43a2f970220\" returns successfully" Oct 2 19:30:40.265429 kubelet[2032]: E1002 19:30:40.265383 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-zslsl_kube-system(937fe63d-c848-437e-92a0-ef4d4b86f794)\"" pod="kube-system/cilium-zslsl" podUID=937fe63d-c848-437e-92a0-ef4d4b86f794 Oct 2 19:30:40.545312 kubelet[2032]: E1002 19:30:40.544452 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:40.899762 env[1571]: time="2023-10-02T19:30:40.899705302Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:30:40.903921 env[1571]: time="2023-10-02T19:30:40.903861043Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:30:40.906895 env[1571]: time="2023-10-02T19:30:40.906848327Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:30:40.907819 env[1571]: time="2023-10-02T19:30:40.907775236Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 2 19:30:40.911554 env[1571]: time="2023-10-02T19:30:40.911470787Z" level=info msg="CreateContainer within sandbox \"594106e78aa82bd894aabaee72ec54248f1c14ebafd8dd85e6a91e369a690faa\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:30:40.929291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2606088005.mount: Deactivated successfully. 
Oct 2 19:30:40.939712 env[1571]: time="2023-10-02T19:30:40.939650880Z" level=info msg="CreateContainer within sandbox \"594106e78aa82bd894aabaee72ec54248f1c14ebafd8dd85e6a91e369a690faa\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8\"" Oct 2 19:30:40.940976 env[1571]: time="2023-10-02T19:30:40.940919208Z" level=info msg="StartContainer for \"ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8\"" Oct 2 19:30:40.998595 systemd[1]: Started cri-containerd-ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8.scope. Oct 2 19:30:41.036000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.040081 kernel: kauditd_printk_skb: 106 callbacks suppressed Oct 2 19:30:41.040231 kernel: audit: type=1400 audit(1696275041.036:758): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.039000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.055259 kernel: audit: type=1400 audit(1696275041.039:759): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.055378 kernel: audit: type=1400 audit(1696275041.039:760): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.039000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.039000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.070721 kernel: audit: type=1400 audit(1696275041.039:761): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.039000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.039000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.086880 kernel: audit: type=1400 audit(1696275041.039:762): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.086969 kernel: audit: type=1400 audit(1696275041.039:763): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.039000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.095539 kernel: audit: type=1400 audit(1696275041.039:764): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.039000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.104955 kernel: audit: type=1400 audit(1696275041.039:765): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.039000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.112911 kernel: audit: type=1400 audit(1696275041.039:766): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.046000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.121268 kernel: audit: type=1400 audit(1696275041.046:767): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.046000 audit: BPF prog-id=89 op=LOAD Oct 2 19:30:41.046000 audit[3056]: AVC avc: denied { bpf } for pid=3056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.046000 audit[3056]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001bdb38 a2=10 a3=0 items=0 ppid=2900 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:41.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365656238383036633433353639316161643262633365346137356435 Oct 2 19:30:41.046000 audit[3056]: AVC avc: denied { perfmon } for pid=3056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.046000 audit[3056]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001bd5a0 a2=3c a3=0 items=0 ppid=2900 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:41.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365656238383036633433353639316161643262633365346137356435 Oct 2 19:30:41.046000 audit[3056]: AVC avc: denied { bpf } for pid=3056 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.046000 audit[3056]: AVC avc: denied { bpf } for pid=3056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.046000 audit[3056]: AVC avc: denied { bpf } for pid=3056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.046000 audit[3056]: AVC avc: denied { perfmon } for pid=3056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.046000 audit[3056]: AVC avc: denied { perfmon } for pid=3056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.046000 audit[3056]: AVC avc: denied { perfmon } for pid=3056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.046000 audit[3056]: AVC avc: denied { perfmon } for pid=3056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.046000 audit[3056]: AVC avc: denied { perfmon } for pid=3056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.046000 audit[3056]: AVC avc: denied { bpf } for pid=3056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.046000 audit[3056]: AVC avc: denied { bpf } for pid=3056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.046000 audit: BPF prog-id=90 op=LOAD Oct 2 19:30:41.046000 audit[3056]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bd8e0 a2=78 a3=0 items=0 ppid=2900 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:41.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365656238383036633433353639316161643262633365346137356435 Oct 2 19:30:41.054000 audit[3056]: AVC avc: denied { bpf } for pid=3056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.054000 audit[3056]: AVC avc: denied { bpf } for pid=3056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.054000 audit[3056]: AVC avc: denied { perfmon } for pid=3056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.054000 audit[3056]: AVC avc: denied { perfmon } for pid=3056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:30:41.054000 audit[3056]: AVC avc: denied { perfmon } for pid=3056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.054000 audit[3056]: AVC avc: denied { perfmon } for pid=3056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.054000 audit[3056]: AVC avc: denied { perfmon } for pid=3056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.054000 audit[3056]: AVC avc: denied { bpf } for pid=3056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.054000 audit[3056]: AVC avc: denied { bpf } for pid=3056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.054000 audit: BPF prog-id=91 op=LOAD Oct 2 19:30:41.054000 audit[3056]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001bd670 a2=78 a3=0 items=0 ppid=2900 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:41.054000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365656238383036633433353639316161643262633365346137356435 Oct 2 19:30:41.062000 audit: BPF prog-id=91 op=UNLOAD Oct 2 19:30:41.062000 audit: BPF prog-id=90 op=UNLOAD Oct 2 19:30:41.062000 audit[3056]: AVC avc: denied { bpf } for pid=3056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.062000 audit[3056]: AVC avc: denied { bpf } for pid=3056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.062000 audit[3056]: AVC avc: denied { bpf } for pid=3056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.062000 audit[3056]: AVC avc: denied { perfmon } for pid=3056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.062000 audit[3056]: AVC avc: denied { perfmon } for pid=3056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.062000 audit[3056]: AVC avc: denied { perfmon } for pid=3056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.062000 audit[3056]: AVC avc: denied { perfmon } for pid=3056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.062000 audit[3056]: AVC avc: denied { perfmon } for pid=3056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:30:41.062000 audit[3056]: AVC avc: denied { bpf } for pid=3056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.062000 audit[3056]: AVC avc: denied { bpf } for pid=3056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.062000 audit: BPF prog-id=92 op=LOAD Oct 2 19:30:41.062000 audit[3056]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bdb40 a2=78 a3=0 items=0 ppid=2900 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:41.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365656238383036633433353639316161643262633365346137356435 Oct 2 19:30:41.142306 env[1571]: time="2023-10-02T19:30:41.142245143Z" level=info msg="StartContainer for \"ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8\" returns successfully" Oct 2 19:30:41.209000 audit[3067]: AVC avc: denied { map_create } for pid=3067 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c191,c616 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c191,c616 tclass=bpf permissive=0 Oct 2 19:30:41.209000 audit[3067]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-13 a0=0 a1=4000609768 a2=48 a3=0 items=0 ppid=2900 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c191,c616 key=(null) Oct 2 19:30:41.209000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:30:41.264525 kubelet[2032]: E1002 19:30:41.264466 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-zslsl_kube-system(937fe63d-c848-437e-92a0-ef4d4b86f794)\"" pod="kube-system/cilium-zslsl" podUID=937fe63d-c848-437e-92a0-ef4d4b86f794 Oct 2 19:30:41.275085 kubelet[2032]: I1002 19:30:41.275019 2032 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-zr99c" podStartSLOduration=-9.223372033579836e+09 pod.CreationTimestamp="2023-10-02 19:30:38 +0000 UTC" firstStartedPulling="2023-10-02 19:30:38.681419732 +0000 UTC m=+215.310670996" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 19:30:41.27397683 +0000 UTC m=+217.903228118" watchObservedRunningTime="2023-10-02 19:30:41.274940026 +0000 UTC m=+217.904191314" Oct 2 19:30:41.545143 kubelet[2032]: E1002 19:30:41.544984 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:41.925272 systemd[1]: run-containerd-runc-k8s.io-ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8-runc.rc2QC7.mount: Deactivated successfully. 
Oct 2 19:30:41.964994 kubelet[2032]: W1002 19:30:41.964948 2032 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod937fe63d_c848_437e_92a0_ef4d4b86f794.slice/cri-containerd-a16f776e0b5658511ebe2b0d8e5e22f44d4ba3573b8fdda5acd8e43a2f970220.scope WatchSource:0}: container "a16f776e0b5658511ebe2b0d8e5e22f44d4ba3573b8fdda5acd8e43a2f970220" in namespace "k8s.io": not found Oct 2 19:30:42.546478 kubelet[2032]: E1002 19:30:42.546423 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:43.547012 kubelet[2032]: E1002 19:30:43.546976 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:44.334253 kubelet[2032]: E1002 19:30:44.334195 2032 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:44.533435 kubelet[2032]: E1002 19:30:44.533404 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:44.548027 kubelet[2032]: E1002 19:30:44.548004 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:45.078791 kubelet[2032]: W1002 19:30:45.078745 2032 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod937fe63d_c848_437e_92a0_ef4d4b86f794.slice/cri-containerd-9a2c3661db1c5b91b8a346b38735ba793a3a02ed27cb03b71815a035f4a1d45b.scope WatchSource:0}: task 9a2c3661db1c5b91b8a346b38735ba793a3a02ed27cb03b71815a035f4a1d45b not found: not found Oct 2 19:30:45.549695 kubelet[2032]: E1002 19:30:45.549654 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:46.550995 kubelet[2032]: E1002 19:30:46.550927 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:47.552171 kubelet[2032]: E1002 19:30:47.552100 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:48.552956 kubelet[2032]: E1002 19:30:48.552918 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:49.535032 kubelet[2032]: E1002 19:30:49.534976 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:49.554407 kubelet[2032]: E1002 19:30:49.554372 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:50.555857 kubelet[2032]: E1002 19:30:50.555789 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:51.556674 kubelet[2032]: E1002 19:30:51.556639 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:52.558151 kubelet[2032]: E1002 19:30:52.558110 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:53.559732 
kubelet[2032]: E1002 19:30:53.559669 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:54.536629 kubelet[2032]: E1002 19:30:54.536583 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:54.560129 kubelet[2032]: E1002 19:30:54.560104 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:55.560972 kubelet[2032]: E1002 19:30:55.560934 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:55.768860 env[1571]: time="2023-10-02T19:30:55.768418852Z" level=info msg="CreateContainer within sandbox \"890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:30:55.788710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3997367693.mount: Deactivated successfully. Oct 2 19:30:55.799871 env[1571]: time="2023-10-02T19:30:55.799806582Z" level=info msg="CreateContainer within sandbox \"890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870\"" Oct 2 19:30:55.801265 env[1571]: time="2023-10-02T19:30:55.801218426Z" level=info msg="StartContainer for \"342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870\"" Oct 2 19:30:55.855406 systemd[1]: Started cri-containerd-342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870.scope. Oct 2 19:30:55.891976 systemd[1]: cri-containerd-342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870.scope: Deactivated successfully. 
Oct 2 19:30:56.103357 env[1571]: time="2023-10-02T19:30:56.103289787Z" level=info msg="shim disconnected" id=342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870 Oct 2 19:30:56.103839 env[1571]: time="2023-10-02T19:30:56.103793966Z" level=warning msg="cleaning up after shim disconnected" id=342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870 namespace=k8s.io Oct 2 19:30:56.103975 env[1571]: time="2023-10-02T19:30:56.103947130Z" level=info msg="cleaning up dead shim" Oct 2 19:30:56.129304 env[1571]: time="2023-10-02T19:30:56.129237425Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:30:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3110 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:30:56Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:30:56.130007 env[1571]: time="2023-10-02T19:30:56.129927071Z" level=error msg="copy shim log" error="read /proc/self/fd/55: file already closed" Oct 2 19:30:56.132314 env[1571]: time="2023-10-02T19:30:56.132255920Z" level=error msg="Failed to pipe stderr of container \"342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870\"" error="reading from a closed fifo" Oct 2 19:30:56.132737 env[1571]: time="2023-10-02T19:30:56.132664858Z" level=error msg="Failed to pipe stdout of container \"342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870\"" error="reading from a closed fifo" Oct 2 19:30:56.135073 env[1571]: time="2023-10-02T19:30:56.134972444Z" level=error msg="StartContainer for \"342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:30:56.136241 kubelet[2032]: E1002 19:30:56.135511 2032 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870" Oct 2 19:30:56.136241 kubelet[2032]: E1002 19:30:56.136139 2032 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:30:56.136241 kubelet[2032]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:30:56.136241 kubelet[2032]: rm /hostbin/cilium-mount Oct 2 19:30:56.136612 kubelet[2032]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-g9cbv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-zslsl_kube-system(937fe63d-c848-437e-92a0-ef4d4b86f794): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:30:56.136731 kubelet[2032]: E1002 19:30:56.136204 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zslsl" podUID=937fe63d-c848-437e-92a0-ef4d4b86f794 Oct 2 19:30:56.295864 kubelet[2032]: I1002 19:30:56.295132 2032 scope.go:115] "RemoveContainer" containerID="9a2c3661db1c5b91b8a346b38735ba793a3a02ed27cb03b71815a035f4a1d45b" Oct 2 19:30:56.295864 kubelet[2032]: I1002 19:30:56.295641 2032 scope.go:115] "RemoveContainer" containerID="9a2c3661db1c5b91b8a346b38735ba793a3a02ed27cb03b71815a035f4a1d45b" Oct 2 19:30:56.298405 env[1571]: time="2023-10-02T19:30:56.298356595Z" level=info msg="RemoveContainer for \"9a2c3661db1c5b91b8a346b38735ba793a3a02ed27cb03b71815a035f4a1d45b\"" Oct 2 19:30:56.300860 env[1571]: time="2023-10-02T19:30:56.300756662Z" level=info msg="RemoveContainer for \"9a2c3661db1c5b91b8a346b38735ba793a3a02ed27cb03b71815a035f4a1d45b\"" Oct 2 19:30:56.301211 env[1571]: time="2023-10-02T19:30:56.301100513Z" level=error msg="RemoveContainer for \"9a2c3661db1c5b91b8a346b38735ba793a3a02ed27cb03b71815a035f4a1d45b\" failed" error="failed to set removing state for container \"9a2c3661db1c5b91b8a346b38735ba793a3a02ed27cb03b71815a035f4a1d45b\": container is already in removing state" Oct 2 19:30:56.301550 kubelet[2032]: E1002 19:30:56.301523 2032 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"9a2c3661db1c5b91b8a346b38735ba793a3a02ed27cb03b71815a035f4a1d45b\": container is already in removing state" 
containerID="9a2c3661db1c5b91b8a346b38735ba793a3a02ed27cb03b71815a035f4a1d45b" Oct 2 19:30:56.301716 kubelet[2032]: E1002 19:30:56.301696 2032 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "9a2c3661db1c5b91b8a346b38735ba793a3a02ed27cb03b71815a035f4a1d45b": container is already in removing state; Skipping pod "cilium-zslsl_kube-system(937fe63d-c848-437e-92a0-ef4d4b86f794)" Oct 2 19:30:56.302373 kubelet[2032]: E1002 19:30:56.302347 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-zslsl_kube-system(937fe63d-c848-437e-92a0-ef4d4b86f794)\"" pod="kube-system/cilium-zslsl" podUID=937fe63d-c848-437e-92a0-ef4d4b86f794 Oct 2 19:30:56.307040 env[1571]: time="2023-10-02T19:30:56.306965692Z" level=info msg="RemoveContainer for \"9a2c3661db1c5b91b8a346b38735ba793a3a02ed27cb03b71815a035f4a1d45b\" returns successfully" Oct 2 19:30:56.563033 kubelet[2032]: E1002 19:30:56.561588 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:56.783793 systemd[1]: run-containerd-runc-k8s.io-342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870-runc.Jo8GgI.mount: Deactivated successfully. Oct 2 19:30:56.783963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870-rootfs.mount: Deactivated successfully. Oct 2 19:30:57.564788 kubelet[2032]: E1002 19:30:57.564727 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:58.565653 kubelet[2032]: E1002 19:30:58.565582 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:59.209849 kubelet[2032]: W1002 19:30:59.209803 2032 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod937fe63d_c848_437e_92a0_ef4d4b86f794.slice/cri-containerd-342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870.scope WatchSource:0}: task 342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870 not found: not found Oct 2 19:30:59.538499 kubelet[2032]: E1002 19:30:59.538372 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:59.566679 kubelet[2032]: E1002 19:30:59.566621 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:00.567183 kubelet[2032]: E1002 19:31:00.567144 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:01.568217 kubelet[2032]: E1002 19:31:01.568147 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:02.569197 kubelet[2032]: E1002 19:31:02.569137 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:03.569850 kubelet[2032]: E1002 19:31:03.569788 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
Oct 2 19:31:04.333966 kubelet[2032]: E1002 19:31:04.333907 2032 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:04.357680 env[1571]: time="2023-10-02T19:31:04.357602340Z" level=info msg="StopPodSandbox for \"742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c\"" Oct 2 19:31:04.358228 env[1571]: time="2023-10-02T19:31:04.357811005Z" level=info msg="TearDown network for sandbox \"742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c\" successfully" Oct 2 19:31:04.358228 env[1571]: time="2023-10-02T19:31:04.357892111Z" level=info msg="StopPodSandbox for \"742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c\" returns successfully" Oct 2 19:31:04.358882 env[1571]: time="2023-10-02T19:31:04.358799606Z" level=info msg="RemovePodSandbox for \"742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c\"" Oct 2 19:31:04.358988 env[1571]: time="2023-10-02T19:31:04.358878553Z" level=info msg="Forcibly stopping sandbox \"742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c\"" Oct 2 19:31:04.359138 env[1571]: time="2023-10-02T19:31:04.359100645Z" level=info msg="TearDown network for sandbox \"742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c\" successfully" Oct 2 19:31:04.365940 env[1571]: time="2023-10-02T19:31:04.365723011Z" level=info msg="RemovePodSandbox \"742596c337809b03904beba4389bd404d4b18c905940c35af2f6737ada6d189c\" returns successfully" Oct 2 19:31:04.539969 kubelet[2032]: E1002 19:31:04.539916 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:04.570681 kubelet[2032]: E1002 19:31:04.570625 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:05.571798 kubelet[2032]: E1002 19:31:05.571742 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:06.572270 kubelet[2032]: E1002 19:31:06.572211 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:07.573212 kubelet[2032]: E1002 19:31:07.573152 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:08.574014 kubelet[2032]: E1002 19:31:08.573971 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:09.541415 kubelet[2032]: E1002 19:31:09.541381 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:09.575268 kubelet[2032]: E1002 19:31:09.575238 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:10.575949 kubelet[2032]: E1002 19:31:10.575912 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:10.765228 kubelet[2032]: E1002 19:31:10.765191 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup 
pod=cilium-zslsl_kube-system(937fe63d-c848-437e-92a0-ef4d4b86f794)\"" pod="kube-system/cilium-zslsl" podUID=937fe63d-c848-437e-92a0-ef4d4b86f794 Oct 2 19:31:11.577570 kubelet[2032]: E1002 19:31:11.577508 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:12.578432 kubelet[2032]: E1002 19:31:12.578374 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:13.579546 kubelet[2032]: E1002 19:31:13.579508 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:14.543327 kubelet[2032]: E1002 19:31:14.543274 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:14.580820 kubelet[2032]: E1002 19:31:14.580794 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:15.581693 kubelet[2032]: E1002 19:31:15.581652 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:16.583246 kubelet[2032]: E1002 19:31:16.583210 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:17.583968 kubelet[2032]: E1002 19:31:17.583927 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:18.585140 kubelet[2032]: E1002 19:31:18.585102 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:19.544298 kubelet[2032]: E1002 19:31:19.544263 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:19.586474 kubelet[2032]: E1002 19:31:19.586418 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:20.586917 kubelet[2032]: E1002 19:31:20.586857 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:21.587295 kubelet[2032]: E1002 19:31:21.587254 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:21.768797 env[1571]: time="2023-10-02T19:31:21.768724976Z" level=info msg="CreateContainer within sandbox \"890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:31:21.786226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4169855409.mount: Deactivated successfully. Oct 2 19:31:21.796584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3268555337.mount: Deactivated successfully. 
Oct 2 19:31:21.798650 env[1571]: time="2023-10-02T19:31:21.798578866Z" level=info msg="CreateContainer within sandbox \"890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"13b0fef6bc191c24feeaf2f6382a0b8fb355a5f1782b3d994d32bdc51a81f494\"" Oct 2 19:31:21.799785 env[1571]: time="2023-10-02T19:31:21.799731965Z" level=info msg="StartContainer for \"13b0fef6bc191c24feeaf2f6382a0b8fb355a5f1782b3d994d32bdc51a81f494\"" Oct 2 19:31:21.849296 systemd[1]: Started cri-containerd-13b0fef6bc191c24feeaf2f6382a0b8fb355a5f1782b3d994d32bdc51a81f494.scope. Oct 2 19:31:21.884553 systemd[1]: cri-containerd-13b0fef6bc191c24feeaf2f6382a0b8fb355a5f1782b3d994d32bdc51a81f494.scope: Deactivated successfully. Oct 2 19:31:21.904326 env[1571]: time="2023-10-02T19:31:21.904257480Z" level=info msg="shim disconnected" id=13b0fef6bc191c24feeaf2f6382a0b8fb355a5f1782b3d994d32bdc51a81f494 Oct 2 19:31:21.904660 env[1571]: time="2023-10-02T19:31:21.904626359Z" level=warning msg="cleaning up after shim disconnected" id=13b0fef6bc191c24feeaf2f6382a0b8fb355a5f1782b3d994d32bdc51a81f494 namespace=k8s.io Oct 2 19:31:21.904780 env[1571]: time="2023-10-02T19:31:21.904752358Z" level=info msg="cleaning up dead shim" Oct 2 19:31:21.932569 env[1571]: time="2023-10-02T19:31:21.932505404Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:31:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3154 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:31:21Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/13b0fef6bc191c24feeaf2f6382a0b8fb355a5f1782b3d994d32bdc51a81f494/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:31:21.933276 env[1571]: time="2023-10-02T19:31:21.933197274Z" level=error msg="copy shim log" error="read /proc/self/fd/49: file already closed" Oct 2 19:31:21.936184 env[1571]: time="2023-10-02T19:31:21.936121806Z" level=error msg="Failed to pipe stdout of container \"13b0fef6bc191c24feeaf2f6382a0b8fb355a5f1782b3d994d32bdc51a81f494\"" error="reading from a closed fifo" Oct 2 19:31:21.937376 env[1571]: time="2023-10-02T19:31:21.937302398Z" level=error msg="Failed to pipe stderr of container \"13b0fef6bc191c24feeaf2f6382a0b8fb355a5f1782b3d994d32bdc51a81f494\"" error="reading from a closed fifo" Oct 2 19:31:21.939410 env[1571]: time="2023-10-02T19:31:21.939336197Z" level=error msg="StartContainer for \"13b0fef6bc191c24feeaf2f6382a0b8fb355a5f1782b3d994d32bdc51a81f494\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:31:21.940101 kubelet[2032]: E1002 19:31:21.939745 2032 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="13b0fef6bc191c24feeaf2f6382a0b8fb355a5f1782b3d994d32bdc51a81f494" Oct 2 19:31:21.940101 kubelet[2032]: E1002 19:31:21.939909 2032 kuberuntime_manager.go:872] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:31:21.940101 kubelet[2032]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:31:21.940101 kubelet[2032]: rm /hostbin/cilium-mount Oct 2 19:31:21.940440 kubelet[2032]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-g9cbv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-zslsl_kube-system(937fe63d-c848-437e-92a0-ef4d4b86f794): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:31:21.940558 kubelet[2032]: E1002 19:31:21.939965 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zslsl" podUID=937fe63d-c848-437e-92a0-ef4d4b86f794 Oct 2 19:31:22.352929 kubelet[2032]: I1002 19:31:22.352191 2032 scope.go:115] "RemoveContainer" containerID="342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870" Oct 2 19:31:22.352929 kubelet[2032]: I1002 19:31:22.352697 2032 scope.go:115] "RemoveContainer" containerID="342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870" Oct 2 19:31:22.355461 env[1571]: time="2023-10-02T19:31:22.355391805Z" level=info msg="RemoveContainer for \"342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870\"" Oct 2 19:31:22.357034 env[1571]: time="2023-10-02T19:31:22.356862520Z" level=info msg="RemoveContainer for \"342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870\"" Oct 2 19:31:22.357357 env[1571]: time="2023-10-02T19:31:22.357254895Z" level=error msg="RemoveContainer for \"342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870\" failed" error="failed to set removing state for container 
\"342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870\": container is already in removing state" Oct 2 19:31:22.358507 kubelet[2032]: E1002 19:31:22.358475 2032 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870\": container is already in removing state" containerID="342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870" Oct 2 19:31:22.359338 kubelet[2032]: E1002 19:31:22.359310 2032 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870": container is already in removing state; Skipping pod "cilium-zslsl_kube-system(937fe63d-c848-437e-92a0-ef4d4b86f794)" Oct 2 19:31:22.359878 kubelet[2032]: E1002 19:31:22.359856 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-zslsl_kube-system(937fe63d-c848-437e-92a0-ef4d4b86f794)\"" pod="kube-system/cilium-zslsl" podUID=937fe63d-c848-437e-92a0-ef4d4b86f794 Oct 2 19:31:22.362814 env[1571]: time="2023-10-02T19:31:22.362759649Z" level=info msg="RemoveContainer for \"342b78e30d645dceec872c384d84018bcd312380718accdcf82cd0e4841de870\" returns successfully" Oct 2 19:31:22.588753 kubelet[2032]: E1002 19:31:22.588687 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:22.781043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13b0fef6bc191c24feeaf2f6382a0b8fb355a5f1782b3d994d32bdc51a81f494-rootfs.mount: Deactivated successfully. 
Oct 2 19:31:23.589516 kubelet[2032]: E1002 19:31:23.589454 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:24.333851 kubelet[2032]: E1002 19:31:24.333813 2032 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:24.545356 kubelet[2032]: E1002 19:31:24.545321 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:24.590540 kubelet[2032]: E1002 19:31:24.590163 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:25.011070 kubelet[2032]: W1002 19:31:25.011004 2032 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod937fe63d_c848_437e_92a0_ef4d4b86f794.slice/cri-containerd-13b0fef6bc191c24feeaf2f6382a0b8fb355a5f1782b3d994d32bdc51a81f494.scope WatchSource:0}: task 13b0fef6bc191c24feeaf2f6382a0b8fb355a5f1782b3d994d32bdc51a81f494 not found: not found Oct 2 19:31:25.591810 kubelet[2032]: E1002 19:31:25.591713 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:26.592800 kubelet[2032]: E1002 19:31:26.592736 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:27.593508 kubelet[2032]: E1002 19:31:27.593458 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:28.594485 kubelet[2032]: E1002 19:31:28.594424 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:29.546934 kubelet[2032]: E1002 19:31:29.546897 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:29.594951 kubelet[2032]: E1002 19:31:29.594903 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:30.596450 kubelet[2032]: E1002 19:31:30.596400 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:31.597926 kubelet[2032]: E1002 19:31:31.597835 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:32.598947 kubelet[2032]: E1002 19:31:32.598850 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:33.422164 update_engine[1561]: I1002 19:31:33.421993 1561 prefs.cc:51] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 2 19:31:33.422164 update_engine[1561]: I1002 19:31:33.422106 1561 prefs.cc:51] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 2 19:31:33.423004 update_engine[1561]: I1002 19:31:33.422492 1561 prefs.cc:51] aleph-version not present in /var/lib/update_engine/prefs Oct 2 19:31:33.423427 update_engine[1561]: I1002 19:31:33.423374 1561 omaha_request_params.cc:62] Current group set to lts Oct 2 19:31:33.423642 update_engine[1561]: I1002 19:31:33.423599 1561 
update_attempter.cc:495] Already updated boot flags. Skipping. Oct 2 19:31:33.423642 update_engine[1561]: I1002 19:31:33.423627 1561 update_attempter.cc:638] Scheduling an action processor start. Oct 2 19:31:33.423791 update_engine[1561]: I1002 19:31:33.423661 1561 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 2 19:31:33.423791 update_engine[1561]: I1002 19:31:33.423719 1561 prefs.cc:51] previous-version not present in /var/lib/update_engine/prefs Oct 2 19:31:33.424978 update_engine[1561]: I1002 19:31:33.424908 1561 omaha_request_action.cc:268] Posting an Omaha request to https://public.update.flatcar-linux.net/v1/update/ Oct 2 19:31:33.424978 update_engine[1561]: I1002 19:31:33.424955 1561 omaha_request_action.cc:269] Request: Oct 2 19:31:33.424978 update_engine[1561]: Oct 2 19:31:33.424978 update_engine[1561]: Oct 2 19:31:33.424978 update_engine[1561]: Oct 2 19:31:33.424978 update_engine[1561]: Oct 2 19:31:33.424978 update_engine[1561]: Oct 2 19:31:33.424978 update_engine[1561]: Oct 2 19:31:33.424978 update_engine[1561]: Oct 2 19:31:33.424978 update_engine[1561]: Oct 2 19:31:33.424978 update_engine[1561]: I1002 19:31:33.424978 1561 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 2 19:31:33.426718 locksmithd[1616]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 2 19:31:33.428949 update_engine[1561]: I1002 19:31:33.428872 1561 libcurl_http_fetcher.cc:174] Setting up curl options for HTTPS Oct 2 19:31:33.429250 update_engine[1561]: I1002 19:31:33.429203 1561 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 2 19:31:33.599738 kubelet[2032]: E1002 19:31:33.599668 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:34.548797 kubelet[2032]: E1002 19:31:34.548762 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:34.600797 kubelet[2032]: E1002 19:31:34.600752 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:34.644220 update_engine[1561]: I1002 19:31:34.644145 1561 prefs.cc:51] update-server-cert-0-2 not present in /var/lib/update_engine/prefs Oct 2 19:31:34.644741 update_engine[1561]: I1002 19:31:34.644549 1561 prefs.cc:51] update-server-cert-0-1 not present in /var/lib/update_engine/prefs Oct 2 19:31:34.644924 update_engine[1561]: I1002 19:31:34.644876 1561 prefs.cc:51] update-server-cert-0-0 not present in /var/lib/update_engine/prefs Oct 2 19:31:34.969729 update_engine[1561]: I1002 19:31:34.969656 1561 libcurl_http_fetcher.cc:263] HTTP response code: 200 Oct 2 19:31:34.971953 update_engine[1561]: I1002 19:31:34.971877 1561 libcurl_http_fetcher.cc:320] Transfer completed (200), 314 bytes downloaded Oct 2 19:31:34.971953 update_engine[1561]: I1002 19:31:34.971930 1561 omaha_request_action.cc:619] Omaha request response: Oct 2 19:31:34.971953 update_engine[1561]: Oct 2 19:31:34.980445 update_engine[1561]: I1002 19:31:34.980371 1561 omaha_request_action.cc:409] No update. 
Oct 2 19:31:34.980445 update_engine[1561]: I1002 19:31:34.980433 1561 action_processor.cc:82] ActionProcessor::ActionComplete: finished OmahaRequestAction, starting OmahaResponseHandlerAction Oct 2 19:31:34.980445 update_engine[1561]: I1002 19:31:34.980448 1561 omaha_response_handler_action.cc:36] There are no updates. Aborting. Oct 2 19:31:34.980729 update_engine[1561]: I1002 19:31:34.980458 1561 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaResponseHandlerAction action failed. Aborting processing. Oct 2 19:31:34.980729 update_engine[1561]: I1002 19:31:34.980468 1561 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaResponseHandlerAction Oct 2 19:31:34.980729 update_engine[1561]: I1002 19:31:34.980476 1561 update_attempter.cc:302] Processing Done. Oct 2 19:31:34.980729 update_engine[1561]: I1002 19:31:34.980499 1561 update_attempter.cc:338] No update. Oct 2 19:31:34.980729 update_engine[1561]: I1002 19:31:34.980516 1561 update_check_scheduler.cc:74] Next update check in 46m39s Oct 2 19:31:34.981132 locksmithd[1616]: LastCheckedTime=1696275094 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Oct 2 19:31:35.602200 kubelet[2032]: E1002 19:31:35.602126 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:36.602585 kubelet[2032]: E1002 19:31:36.602538 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:37.604079 kubelet[2032]: E1002 19:31:37.604005 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:37.765491 kubelet[2032]: E1002 19:31:37.765450 2032 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-zslsl_kube-system(937fe63d-c848-437e-92a0-ef4d4b86f794)\"" pod="kube-system/cilium-zslsl" podUID=937fe63d-c848-437e-92a0-ef4d4b86f794 Oct 2 19:31:38.604847 kubelet[2032]: E1002 19:31:38.604772 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:39.550196 kubelet[2032]: E1002 19:31:39.550155 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:39.605930 kubelet[2032]: E1002 19:31:39.605860 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:39.781879 env[1571]: time="2023-10-02T19:31:39.781800503Z" level=info msg="StopPodSandbox for \"890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27\"" Oct 2 19:31:39.785800 env[1571]: time="2023-10-02T19:31:39.781928292Z" level=info msg="Container to stop \"13b0fef6bc191c24feeaf2f6382a0b8fb355a5f1782b3d994d32bdc51a81f494\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:31:39.784594 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27-shm.mount: Deactivated successfully. 
Oct 2 19:31:39.797145 env[1571]: time="2023-10-02T19:31:39.797001644Z" level=info msg="StopContainer for \"ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8\" with timeout 30 (s)" Oct 2 19:31:39.797955 env[1571]: time="2023-10-02T19:31:39.797848574Z" level=info msg="Stop container \"ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8\" with signal terminated" Oct 2 19:31:39.806074 systemd[1]: cri-containerd-890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27.scope: Deactivated successfully. Oct 2 19:31:39.805000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:31:39.810389 kernel: kauditd_printk_skb: 50 callbacks suppressed Oct 2 19:31:39.810550 kernel: audit: type=1334 audit(1696275099.805:777): prog-id=85 op=UNLOAD Oct 2 19:31:39.815000 audit: BPF prog-id=88 op=UNLOAD Oct 2 19:31:39.820108 kernel: audit: type=1334 audit(1696275099.815:778): prog-id=88 op=UNLOAD Oct 2 19:31:39.847403 systemd[1]: cri-containerd-ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8.scope: Deactivated successfully. Oct 2 19:31:39.846000 audit: BPF prog-id=89 op=UNLOAD Oct 2 19:31:39.852307 kernel: audit: type=1334 audit(1696275099.846:779): prog-id=89 op=UNLOAD Oct 2 19:31:39.852000 audit: BPF prog-id=92 op=UNLOAD Oct 2 19:31:39.857118 kernel: audit: type=1334 audit(1696275099.852:780): prog-id=92 op=UNLOAD Oct 2 19:31:39.879934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27-rootfs.mount: Deactivated successfully. Oct 2 19:31:39.897379 env[1571]: time="2023-10-02T19:31:39.897307357Z" level=info msg="shim disconnected" id=890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27 Oct 2 19:31:39.898695 env[1571]: time="2023-10-02T19:31:39.898626479Z" level=warning msg="cleaning up after shim disconnected" id=890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27 namespace=k8s.io Oct 2 19:31:39.898957 env[1571]: time="2023-10-02T19:31:39.898917757Z" level=info msg="cleaning up dead shim" Oct 2 19:31:39.918577 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8-rootfs.mount: Deactivated successfully. 
Oct 2 19:31:39.927171 env[1571]: time="2023-10-02T19:31:39.927100142Z" level=info msg="shim disconnected" id=ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8 Oct 2 19:31:39.927754 env[1571]: time="2023-10-02T19:31:39.927701863Z" level=warning msg="cleaning up after shim disconnected" id=ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8 namespace=k8s.io Oct 2 19:31:39.927939 env[1571]: time="2023-10-02T19:31:39.927908517Z" level=info msg="cleaning up dead shim" Oct 2 19:31:39.947815 env[1571]: time="2023-10-02T19:31:39.947750214Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:31:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3205 runtime=io.containerd.runc.v2\n" Oct 2 19:31:39.948690 env[1571]: time="2023-10-02T19:31:39.948630444Z" level=info msg="TearDown network for sandbox \"890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27\" successfully" Oct 2 19:31:39.948920 env[1571]: time="2023-10-02T19:31:39.948883862Z" level=info msg="StopPodSandbox for \"890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27\" returns successfully" Oct 2 19:31:39.966949 env[1571]: time="2023-10-02T19:31:39.966891657Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:31:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3213 runtime=io.containerd.runc.v2\n" Oct 2 19:31:39.970288 env[1571]: time="2023-10-02T19:31:39.970038309Z" level=info msg="StopContainer for \"ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8\" returns successfully" Oct 2 19:31:39.971297 env[1571]: time="2023-10-02T19:31:39.971239591Z" level=info msg="StopPodSandbox for \"594106e78aa82bd894aabaee72ec54248f1c14ebafd8dd85e6a91e369a690faa\"" Oct 2 19:31:39.971614 env[1571]: time="2023-10-02T19:31:39.971550345Z" level=info msg="Container to stop \"ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:31:39.974279 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-594106e78aa82bd894aabaee72ec54248f1c14ebafd8dd85e6a91e369a690faa-shm.mount: Deactivated successfully. Oct 2 19:31:39.995000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:31:39.996031 systemd[1]: cri-containerd-594106e78aa82bd894aabaee72ec54248f1c14ebafd8dd85e6a91e369a690faa.scope: Deactivated successfully. 
Oct 2 19:31:40.001103 kernel: audit: type=1334 audit(1696275099.995:781): prog-id=81 op=UNLOAD Oct 2 19:31:40.006086 kernel: audit: type=1334 audit(1696275100.002:782): prog-id=84 op=UNLOAD Oct 2 19:31:40.002000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:31:40.030533 kubelet[2032]: I1002 19:31:40.030463 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/937fe63d-c848-437e-92a0-ef4d4b86f794-hubble-tls\") pod \"937fe63d-c848-437e-92a0-ef4d4b86f794\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " Oct 2 19:31:40.030778 kubelet[2032]: I1002 19:31:40.030546 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-cni-path\") pod \"937fe63d-c848-437e-92a0-ef4d4b86f794\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " Oct 2 19:31:40.030778 kubelet[2032]: I1002 19:31:40.030598 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/937fe63d-c848-437e-92a0-ef4d4b86f794-cilium-ipsec-secrets\") pod \"937fe63d-c848-437e-92a0-ef4d4b86f794\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " Oct 2 19:31:40.030778 kubelet[2032]: I1002 19:31:40.030644 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/937fe63d-c848-437e-92a0-ef4d4b86f794-clustermesh-secrets\") pod \"937fe63d-c848-437e-92a0-ef4d4b86f794\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " Oct 2 19:31:40.030778 kubelet[2032]: I1002 19:31:40.030686 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-hostproc\") pod \"937fe63d-c848-437e-92a0-ef4d4b86f794\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " Oct 2 19:31:40.030778 kubelet[2032]: I1002 19:31:40.030738 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-xtables-lock\") pod \"937fe63d-c848-437e-92a0-ef4d4b86f794\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " Oct 2 19:31:40.031142 kubelet[2032]: I1002 19:31:40.030784 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-lib-modules\") pod \"937fe63d-c848-437e-92a0-ef4d4b86f794\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " Oct 2 19:31:40.031142 kubelet[2032]: I1002 19:31:40.030830 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/937fe63d-c848-437e-92a0-ef4d4b86f794-cilium-config-path\") pod \"937fe63d-c848-437e-92a0-ef4d4b86f794\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " Oct 2 19:31:40.031142 kubelet[2032]: I1002 19:31:40.030871 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-host-proc-sys-net\") pod \"937fe63d-c848-437e-92a0-ef4d4b86f794\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " Oct 2 19:31:40.031142 kubelet[2032]: I1002 19:31:40.030915 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9cbv\" 
(UniqueName: \"kubernetes.io/projected/937fe63d-c848-437e-92a0-ef4d4b86f794-kube-api-access-g9cbv\") pod \"937fe63d-c848-437e-92a0-ef4d4b86f794\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " Oct 2 19:31:40.031142 kubelet[2032]: I1002 19:31:40.030958 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-cilium-cgroup\") pod \"937fe63d-c848-437e-92a0-ef4d4b86f794\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " Oct 2 19:31:40.031142 kubelet[2032]: I1002 19:31:40.031002 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-host-proc-sys-kernel\") pod \"937fe63d-c848-437e-92a0-ef4d4b86f794\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " Oct 2 19:31:40.031513 kubelet[2032]: I1002 19:31:40.031041 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-bpf-maps\") pod \"937fe63d-c848-437e-92a0-ef4d4b86f794\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " Oct 2 19:31:40.031513 kubelet[2032]: I1002 19:31:40.031150 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-etc-cni-netd\") pod \"937fe63d-c848-437e-92a0-ef4d4b86f794\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " Oct 2 19:31:40.031513 kubelet[2032]: I1002 19:31:40.031217 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-cilium-run\") pod \"937fe63d-c848-437e-92a0-ef4d4b86f794\" (UID: \"937fe63d-c848-437e-92a0-ef4d4b86f794\") " Oct 2 19:31:40.031513 kubelet[2032]: I1002 19:31:40.031312 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "937fe63d-c848-437e-92a0-ef4d4b86f794" (UID: "937fe63d-c848-437e-92a0-ef4d4b86f794"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:40.031513 kubelet[2032]: I1002 19:31:40.031388 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "937fe63d-c848-437e-92a0-ef4d4b86f794" (UID: "937fe63d-c848-437e-92a0-ef4d4b86f794"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:40.031818 kubelet[2032]: I1002 19:31:40.031456 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "937fe63d-c848-437e-92a0-ef4d4b86f794" (UID: "937fe63d-c848-437e-92a0-ef4d4b86f794"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:40.032232 kubelet[2032]: I1002 19:31:40.032183 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "937fe63d-c848-437e-92a0-ef4d4b86f794" (UID: "937fe63d-c848-437e-92a0-ef4d4b86f794"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:40.032603 kubelet[2032]: W1002 19:31:40.032532 2032 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/937fe63d-c848-437e-92a0-ef4d4b86f794/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:31:40.033872 kubelet[2032]: I1002 19:31:40.033765 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "937fe63d-c848-437e-92a0-ef4d4b86f794" (UID: "937fe63d-c848-437e-92a0-ef4d4b86f794"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:40.034383 kubelet[2032]: I1002 19:31:40.034323 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "937fe63d-c848-437e-92a0-ef4d4b86f794" (UID: "937fe63d-c848-437e-92a0-ef4d4b86f794"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:40.034518 kubelet[2032]: I1002 19:31:40.034401 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "937fe63d-c848-437e-92a0-ef4d4b86f794" (UID: "937fe63d-c848-437e-92a0-ef4d4b86f794"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:40.034518 kubelet[2032]: I1002 19:31:40.034446 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "937fe63d-c848-437e-92a0-ef4d4b86f794" (UID: "937fe63d-c848-437e-92a0-ef4d4b86f794"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:40.034868 kubelet[2032]: I1002 19:31:40.034815 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-cni-path" (OuterVolumeSpecName: "cni-path") pod "937fe63d-c848-437e-92a0-ef4d4b86f794" (UID: "937fe63d-c848-437e-92a0-ef4d4b86f794"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:40.035267 kubelet[2032]: I1002 19:31:40.035190 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-hostproc" (OuterVolumeSpecName: "hostproc") pod "937fe63d-c848-437e-92a0-ef4d4b86f794" (UID: "937fe63d-c848-437e-92a0-ef4d4b86f794"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:40.040744 kubelet[2032]: I1002 19:31:40.040683 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/937fe63d-c848-437e-92a0-ef4d4b86f794-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "937fe63d-c848-437e-92a0-ef4d4b86f794" (UID: "937fe63d-c848-437e-92a0-ef4d4b86f794"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:31:40.061251 systemd[1]: var-lib-kubelet-pods-937fe63d\x2dc848\x2d437e\x2d92a0\x2def4d4b86f794-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:31:40.070599 kubelet[2032]: I1002 19:31:40.070426 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/937fe63d-c848-437e-92a0-ef4d4b86f794-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "937fe63d-c848-437e-92a0-ef4d4b86f794" (UID: "937fe63d-c848-437e-92a0-ef4d4b86f794"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:31:40.072296 kubelet[2032]: I1002 19:31:40.071896 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/937fe63d-c848-437e-92a0-ef4d4b86f794-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "937fe63d-c848-437e-92a0-ef4d4b86f794" (UID: "937fe63d-c848-437e-92a0-ef4d4b86f794"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:31:40.080782 kubelet[2032]: I1002 19:31:40.080519 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/937fe63d-c848-437e-92a0-ef4d4b86f794-kube-api-access-g9cbv" (OuterVolumeSpecName: "kube-api-access-g9cbv") pod "937fe63d-c848-437e-92a0-ef4d4b86f794" (UID: "937fe63d-c848-437e-92a0-ef4d4b86f794"). InnerVolumeSpecName "kube-api-access-g9cbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:31:40.084641 env[1571]: time="2023-10-02T19:31:40.084500950Z" level=info msg="shim disconnected" id=594106e78aa82bd894aabaee72ec54248f1c14ebafd8dd85e6a91e369a690faa Oct 2 19:31:40.084641 env[1571]: time="2023-10-02T19:31:40.084581903Z" level=warning msg="cleaning up after shim disconnected" id=594106e78aa82bd894aabaee72ec54248f1c14ebafd8dd85e6a91e369a690faa namespace=k8s.io Oct 2 19:31:40.084641 env[1571]: time="2023-10-02T19:31:40.084628067Z" level=info msg="cleaning up dead shim" Oct 2 19:31:40.085329 kubelet[2032]: I1002 19:31:40.085248 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/937fe63d-c848-437e-92a0-ef4d4b86f794-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "937fe63d-c848-437e-92a0-ef4d4b86f794" (UID: "937fe63d-c848-437e-92a0-ef4d4b86f794"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:31:40.112395 env[1571]: time="2023-10-02T19:31:40.112324704Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:31:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3252 runtime=io.containerd.runc.v2\n" Oct 2 19:31:40.112955 env[1571]: time="2023-10-02T19:31:40.112895141Z" level=info msg="TearDown network for sandbox \"594106e78aa82bd894aabaee72ec54248f1c14ebafd8dd85e6a91e369a690faa\" successfully" Oct 2 19:31:40.113141 env[1571]: time="2023-10-02T19:31:40.112952393Z" level=info msg="StopPodSandbox for \"594106e78aa82bd894aabaee72ec54248f1c14ebafd8dd85e6a91e369a690faa\" returns successfully" Oct 2 19:31:40.132006 kubelet[2032]: I1002 19:31:40.131961 2032 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-xtables-lock\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:31:40.132278 kubelet[2032]: I1002 19:31:40.132252 2032 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-lib-modules\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:31:40.132418 kubelet[2032]: I1002 19:31:40.132397 2032 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/937fe63d-c848-437e-92a0-ef4d4b86f794-cilium-config-path\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:31:40.132550 kubelet[2032]: I1002 19:31:40.132529 2032 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-host-proc-sys-net\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:31:40.132670 kubelet[2032]: I1002 19:31:40.132649 2032 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-hostproc\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:31:40.132811 kubelet[2032]: I1002 19:31:40.132789 2032 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-cilium-cgroup\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:31:40.132943 kubelet[2032]: I1002 19:31:40.132923 2032 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-g9cbv\" (UniqueName: \"kubernetes.io/projected/937fe63d-c848-437e-92a0-ef4d4b86f794-kube-api-access-g9cbv\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:31:40.133115 kubelet[2032]: I1002 19:31:40.133094 2032 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-host-proc-sys-kernel\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:31:40.133271 kubelet[2032]: I1002 19:31:40.133250 2032 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-bpf-maps\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:31:40.133403 kubelet[2032]: I1002 19:31:40.133382 2032 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-etc-cni-netd\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:31:40.133535 kubelet[2032]: I1002 19:31:40.133514 2032 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-cilium-run\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:31:40.133672 kubelet[2032]: I1002 19:31:40.133652 2032 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/937fe63d-c848-437e-92a0-ef4d4b86f794-cilium-ipsec-secrets\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:31:40.133804 kubelet[2032]: I1002 19:31:40.133782 2032 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/937fe63d-c848-437e-92a0-ef4d4b86f794-clustermesh-secrets\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:31:40.133935 kubelet[2032]: I1002 19:31:40.133915 2032 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/937fe63d-c848-437e-92a0-ef4d4b86f794-hubble-tls\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:31:40.134109 kubelet[2032]: I1002 19:31:40.134038 2032 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/937fe63d-c848-437e-92a0-ef4d4b86f794-cni-path\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:31:40.234867 kubelet[2032]: I1002 19:31:40.234822 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a29ac4f-ea0a-4529-b8b6-5994708c1610-cilium-config-path\") pod \"8a29ac4f-ea0a-4529-b8b6-5994708c1610\" (UID: \"8a29ac4f-ea0a-4529-b8b6-5994708c1610\") " Oct 2 19:31:40.235278 kubelet[2032]: I1002 19:31:40.235225 2032 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lgcn\" (UniqueName: \"kubernetes.io/projected/8a29ac4f-ea0a-4529-b8b6-5994708c1610-kube-api-access-2lgcn\") pod \"8a29ac4f-ea0a-4529-b8b6-5994708c1610\" (UID: \"8a29ac4f-ea0a-4529-b8b6-5994708c1610\") " Oct 2 19:31:40.235515 kubelet[2032]: W1002 19:31:40.235469 2032 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/8a29ac4f-ea0a-4529-b8b6-5994708c1610/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:31:40.240823 kubelet[2032]: I1002 19:31:40.240766 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a29ac4f-ea0a-4529-b8b6-5994708c1610-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8a29ac4f-ea0a-4529-b8b6-5994708c1610" (UID: "8a29ac4f-ea0a-4529-b8b6-5994708c1610"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:31:40.245720 kubelet[2032]: I1002 19:31:40.245641 2032 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a29ac4f-ea0a-4529-b8b6-5994708c1610-kube-api-access-2lgcn" (OuterVolumeSpecName: "kube-api-access-2lgcn") pod "8a29ac4f-ea0a-4529-b8b6-5994708c1610" (UID: "8a29ac4f-ea0a-4529-b8b6-5994708c1610"). InnerVolumeSpecName "kube-api-access-2lgcn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:31:40.337136 kubelet[2032]: I1002 19:31:40.336005 2032 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-2lgcn\" (UniqueName: \"kubernetes.io/projected/8a29ac4f-ea0a-4529-b8b6-5994708c1610-kube-api-access-2lgcn\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:31:40.337136 kubelet[2032]: I1002 19:31:40.336092 2032 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a29ac4f-ea0a-4529-b8b6-5994708c1610-cilium-config-path\") on node \"172.31.26.69\" DevicePath \"\"" Oct 2 19:31:40.394643 kubelet[2032]: I1002 19:31:40.394603 2032 scope.go:115] "RemoveContainer" containerID="13b0fef6bc191c24feeaf2f6382a0b8fb355a5f1782b3d994d32bdc51a81f494" Oct 2 19:31:40.398879 env[1571]: time="2023-10-02T19:31:40.398156494Z" level=info msg="RemoveContainer for \"13b0fef6bc191c24feeaf2f6382a0b8fb355a5f1782b3d994d32bdc51a81f494\"" Oct 2 19:31:40.403047 env[1571]: time="2023-10-02T19:31:40.402882865Z" level=info msg="RemoveContainer for \"13b0fef6bc191c24feeaf2f6382a0b8fb355a5f1782b3d994d32bdc51a81f494\" returns successfully" Oct 2 19:31:40.405446 kubelet[2032]: I1002 19:31:40.405412 2032 scope.go:115] "RemoveContainer" containerID="ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8" Oct 2 19:31:40.408304 env[1571]: time="2023-10-02T19:31:40.408234645Z" level=info msg="RemoveContainer for \"ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8\"" Oct 2 19:31:40.413737 systemd[1]: Removed slice kubepods-burstable-pod937fe63d_c848_437e_92a0_ef4d4b86f794.slice. Oct 2 19:31:40.415530 env[1571]: time="2023-10-02T19:31:40.415445156Z" level=info msg="RemoveContainer for \"ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8\" returns successfully" Oct 2 19:31:40.416892 kubelet[2032]: I1002 19:31:40.416840 2032 scope.go:115] "RemoveContainer" containerID="ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8" Oct 2 19:31:40.417572 env[1571]: time="2023-10-02T19:31:40.417394501Z" level=error msg="ContainerStatus for \"ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8\": not found" Oct 2 19:31:40.417922 kubelet[2032]: E1002 19:31:40.417880 2032 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8\": not found" containerID="ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8" Oct 2 19:31:40.418108 kubelet[2032]: I1002 19:31:40.417946 2032 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8} err="failed to get container status \"ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"ceeb8806c435691aad2bc3e4a75d5ff2f3f4fc04c35ae192a8dbdf00340c09c8\": not found" Oct 2 19:31:40.424783 systemd[1]: Removed slice kubepods-besteffort-pod8a29ac4f_ea0a_4529_b8b6_5994708c1610.slice. 
Oct 2 19:31:40.606916 kubelet[2032]: E1002 19:31:40.606751 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:40.769917 kubelet[2032]: I1002 19:31:40.769884 2032 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=8a29ac4f-ea0a-4529-b8b6-5994708c1610 path="/var/lib/kubelet/pods/8a29ac4f-ea0a-4529-b8b6-5994708c1610/volumes"
Oct 2 19:31:40.771202 kubelet[2032]: I1002 19:31:40.771167 2032 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=937fe63d-c848-437e-92a0-ef4d4b86f794 path="/var/lib/kubelet/pods/937fe63d-c848-437e-92a0-ef4d4b86f794/volumes"
Oct 2 19:31:40.784493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-594106e78aa82bd894aabaee72ec54248f1c14ebafd8dd85e6a91e369a690faa-rootfs.mount: Deactivated successfully.
Oct 2 19:31:40.784670 systemd[1]: var-lib-kubelet-pods-937fe63d\x2dc848\x2d437e\x2d92a0\x2def4d4b86f794-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg9cbv.mount: Deactivated successfully.
Oct 2 19:31:40.784817 systemd[1]: var-lib-kubelet-pods-8a29ac4f\x2dea0a\x2d4529\x2db8b6\x2d5994708c1610-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2lgcn.mount: Deactivated successfully.
Oct 2 19:31:40.784969 systemd[1]: var-lib-kubelet-pods-937fe63d\x2dc848\x2d437e\x2d92a0\x2def4d4b86f794-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Oct 2 19:31:40.785135 systemd[1]: var-lib-kubelet-pods-937fe63d\x2dc848\x2d437e\x2d92a0\x2def4d4b86f794-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Oct 2 19:31:41.607309 kubelet[2032]: E1002 19:31:41.607243 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:42.607999 kubelet[2032]: E1002 19:31:42.607920 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:43.609588 kubelet[2032]: E1002 19:31:43.609522 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:44.333552 kubelet[2032]: E1002 19:31:44.333480 2032 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:44.551825 kubelet[2032]: E1002 19:31:44.551791 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:31:44.610564 kubelet[2032]: E1002 19:31:44.610403 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:45.611029 kubelet[2032]: E1002 19:31:45.610983 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:46.612144 kubelet[2032]: E1002 19:31:46.612042 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:47.612942 kubelet[2032]: E1002 19:31:47.612863 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:48.613129 kubelet[2032]: E1002 19:31:48.613080 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:49.553162 kubelet[2032]: E1002 19:31:49.553100 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:31:49.614068 kubelet[2032]: E1002 19:31:49.613983 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:49.965444 amazon-ssm-agent[1548]: 2023-10-02 19:31:49 INFO Backing off health check to every 600 seconds for 1800 seconds.
Oct 2 19:31:50.065898 amazon-ssm-agent[1548]: 2023-10-02 19:31:49 ERROR Health ping failed with error - AccessDeniedException: User: arn:aws:sts::075585003325:assumed-role/jenkins-test/i-01e1930281a688ee2 is not authorized to perform: ssm:UpdateInstanceInformation on resource: arn:aws:ec2:us-west-2:075585003325:instance/i-01e1930281a688ee2 because no identity-based policy allows the ssm:UpdateInstanceInformation action
Oct 2 19:31:50.065898 amazon-ssm-agent[1548]: status code: 400, request id: 8cb94bd4-189b-4c3e-8c34-fb1ab9bacb0c
Oct 2 19:31:50.614912 kubelet[2032]: E1002 19:31:50.614866 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:51.616664 kubelet[2032]: E1002 19:31:51.616592 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:52.616966 kubelet[2032]: E1002 19:31:52.616919 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:53.617817 kubelet[2032]: E1002 19:31:53.617740 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:54.554028 kubelet[2032]: E1002 19:31:54.553979 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:31:54.618599 kubelet[2032]: E1002 19:31:54.618553 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:55.619877 kubelet[2032]: E1002 19:31:55.619811 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:56.620891 kubelet[2032]: E1002 19:31:56.620842 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:57.622327 kubelet[2032]: E1002 19:31:57.622249 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:58.623935 kubelet[2032]: E1002 19:31:58.623864 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:59.555265 kubelet[2032]: E1002 19:31:59.555218 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:31:59.624159 kubelet[2032]: E1002 19:31:59.624093 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:00.624894 kubelet[2032]: E1002 19:32:00.624822 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
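The amazon-ssm-agent failure above is an IAM problem: the instance role jenkins-test has no identity-based policy allowing ssm:UpdateInstanceInformation, so the agent's health ping is rejected and it backs off. A minimal sketch of a policy statement that would permit the call, written as a Python dict so it can be dumped to JSON; the action name comes from the error text, while the statement ID and the "*" resource scope are assumptions, not taken from the log:

    import json

    # Hypothetical policy document for the jenkins-test instance role.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowSsmHealthPing",            # assumed name
                "Effect": "Allow",
                "Action": ["ssm:UpdateInstanceInformation"],
                "Resource": "*",                         # assumed scope
            }
        ],
    }

    if __name__ == "__main__":
        print(json.dumps(policy, indent=2))   # attach to the role out of band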
Oct 2 19:32:01.625585 kubelet[2032]: E1002 19:32:01.625526 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:01.907161 kubelet[2032]: E1002 19:32:01.906301 2032 controller.go:189] failed to update lease, error: Put "https://172.31.16.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.69?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Oct 2 19:32:02.625884 kubelet[2032]: E1002 19:32:02.625837 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:03.627350 kubelet[2032]: E1002 19:32:03.627305 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:04.333310 kubelet[2032]: E1002 19:32:04.333226 2032 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:04.369339 env[1571]: time="2023-10-02T19:32:04.369265087Z" level=info msg="StopPodSandbox for \"890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27\""
Oct 2 19:32:04.369893 env[1571]: time="2023-10-02T19:32:04.369402298Z" level=info msg="TearDown network for sandbox \"890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27\" successfully"
Oct 2 19:32:04.369893 env[1571]: time="2023-10-02T19:32:04.369460319Z" level=info msg="StopPodSandbox for \"890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27\" returns successfully"
Oct 2 19:32:04.370217 env[1571]: time="2023-10-02T19:32:04.370166125Z" level=info msg="RemovePodSandbox for \"890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27\""
Oct 2 19:32:04.370334 env[1571]: time="2023-10-02T19:32:04.370220810Z" level=info msg="Forcibly stopping sandbox \"890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27\""
Oct 2 19:32:04.370475 env[1571]: time="2023-10-02T19:32:04.370429770Z" level=info msg="TearDown network for sandbox \"890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27\" successfully"
Oct 2 19:32:04.374529 env[1571]: time="2023-10-02T19:32:04.374417003Z" level=info msg="RemovePodSandbox \"890e03d93aeaa2b92b85a3968f59e7b77278ea6b9a4c340c9ce359be2e0a4e27\" returns successfully"
Oct 2 19:32:04.375288 env[1571]: time="2023-10-02T19:32:04.375242450Z" level=info msg="StopPodSandbox for \"594106e78aa82bd894aabaee72ec54248f1c14ebafd8dd85e6a91e369a690faa\""
Oct 2 19:32:04.375687 env[1571]: time="2023-10-02T19:32:04.375594837Z" level=info msg="TearDown network for sandbox \"594106e78aa82bd894aabaee72ec54248f1c14ebafd8dd85e6a91e369a690faa\" successfully"
Oct 2 19:32:04.375865 env[1571]: time="2023-10-02T19:32:04.375831134Z" level=info msg="StopPodSandbox for \"594106e78aa82bd894aabaee72ec54248f1c14ebafd8dd85e6a91e369a690faa\" returns successfully"
Oct 2 19:32:04.376555 env[1571]: time="2023-10-02T19:32:04.376503807Z" level=info msg="RemovePodSandbox for \"594106e78aa82bd894aabaee72ec54248f1c14ebafd8dd85e6a91e369a690faa\""
Oct 2 19:32:04.376690 env[1571]: time="2023-10-02T19:32:04.376556104Z" level=info msg="Forcibly stopping sandbox \"594106e78aa82bd894aabaee72ec54248f1c14ebafd8dd85e6a91e369a690faa\""
Oct 2 19:32:04.376690 env[1571]: time="2023-10-02T19:32:04.376675614Z" level=info msg="TearDown network for sandbox \"594106e78aa82bd894aabaee72ec54248f1c14ebafd8dd85e6a91e369a690faa\" successfully"
Oct 2 19:32:04.383807 env[1571]: time="2023-10-02T19:32:04.383730274Z" level=info msg="RemovePodSandbox \"594106e78aa82bd894aabaee72ec54248f1c14ebafd8dd85e6a91e369a690faa\" returns successfully"
Oct 2 19:32:04.416224 kubelet[2032]: W1002 19:32:04.416173 2032 machine.go:65] Cannot read vendor id correctly, set empty.
Oct 2 19:32:04.556443 kubelet[2032]: E1002 19:32:04.556406 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:32:04.628764 kubelet[2032]: E1002 19:32:04.628696 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:05.629563 kubelet[2032]: E1002 19:32:05.629499 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:06.630639 kubelet[2032]: E1002 19:32:06.630575 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:07.631187 kubelet[2032]: E1002 19:32:07.631142 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:08.632535 kubelet[2032]: E1002 19:32:08.632465 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:09.558440 kubelet[2032]: E1002 19:32:09.558386 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:32:09.633376 kubelet[2032]: E1002 19:32:09.633313 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:10.634499 kubelet[2032]: E1002 19:32:10.634435 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:11.635622 kubelet[2032]: E1002 19:32:11.635552 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:11.907866 kubelet[2032]: E1002 19:32:11.907413 2032 controller.go:189] failed to update lease, error: Put "https://172.31.16.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.69?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Oct 2 19:32:12.636389 kubelet[2032]: E1002 19:32:12.636318 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:13.636634 kubelet[2032]: E1002 19:32:13.636560 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:14.559245 kubelet[2032]: E1002 19:32:14.559214 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:32:14.637618 kubelet[2032]: E1002 19:32:14.637575 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:15.639041 kubelet[2032]: E1002 19:32:15.638964 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:15.927543 kubelet[2032]: E1002 19:32:15.926874 2032 controller.go:189] failed to update lease, error: Put "https://172.31.16.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.69?timeout=10s": unexpected EOF
Oct 2 19:32:15.928478 kubelet[2032]: E1002 19:32:15.928410 2032 controller.go:189] failed to update lease, error: Put "https://172.31.16.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.69?timeout=10s": dial tcp 172.31.16.176:6443: connect: connection refused
Oct 2 19:32:15.929209 kubelet[2032]: E1002 19:32:15.929134 2032 controller.go:189] failed to update lease, error: Put "https://172.31.16.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.69?timeout=10s": dial tcp 172.31.16.176:6443: connect: connection refused
Oct 2 19:32:15.929528 kubelet[2032]: I1002 19:32:15.929477 2032 controller.go:116] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease
Oct 2 19:32:15.930457 kubelet[2032]: E1002 19:32:15.930389 2032 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.31.16.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.69?timeout=10s": dial tcp 172.31.16.176:6443: connect: connection refused
Oct 2 19:32:16.131573 kubelet[2032]: E1002 19:32:16.131503 2032 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.31.16.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.69?timeout=10s": dial tcp 172.31.16.176:6443: connect: connection refused
Oct 2 19:32:16.533296 kubelet[2032]: E1002 19:32:16.533241 2032 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.31.16.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.69?timeout=10s": dial tcp 172.31.16.176:6443: connect: connection refused
Oct 2 19:32:16.639228 kubelet[2032]: E1002 19:32:16.639156 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:17.640332 kubelet[2032]: E1002 19:32:17.640284 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:18.641216 kubelet[2032]: E1002 19:32:18.641137 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:19.560434 kubelet[2032]: E1002 19:32:19.560402 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:32:19.641689 kubelet[2032]: E1002 19:32:19.641617 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:20.642589 kubelet[2032]: E1002 19:32:20.642546 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:21.643295 kubelet[2032]: E1002 19:32:21.643251 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:22.644709 kubelet[2032]: E1002 19:32:22.644639 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:23.644977 kubelet[2032]: E1002 19:32:23.644929 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:24.334202 kubelet[2032]: E1002 19:32:24.334135 2032 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:24.561424 kubelet[2032]: E1002 19:32:24.561346 2032 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:32:24.646248 kubelet[2032]: E1002 19:32:24.646206 2032 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"