Oct 2 19:14:42.225716 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Oct 2 19:14:42.225757 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Oct 2 17:55:37 -00 2023
Oct 2 19:14:42.225781 kernel: efi: EFI v2.70 by EDK II
Oct 2 19:14:42.225796 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71accf98
Oct 2 19:14:42.225810 kernel: ACPI: Early table checksum verification disabled
Oct 2 19:14:42.225824 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Oct 2 19:14:42.225840 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Oct 2 19:14:42.225855 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Oct 2 19:14:42.225869 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Oct 2 19:14:42.225883 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Oct 2 19:14:42.225903 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Oct 2 19:14:42.225917 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Oct 2 19:14:42.225931 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Oct 2 19:14:42.225945 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Oct 2 19:14:42.225961 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Oct 2 19:14:42.225980 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Oct 2 19:14:42.225995 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Oct 2 19:14:42.226040 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Oct 2 19:14:42.226059 kernel: printk: bootconsole [uart0] enabled
Oct 2 19:14:42.226074 kernel: NUMA: Failed to initialise from firmware
Oct 2 19:14:42.226089 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Oct 2 19:14:42.226105 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff]
Oct 2 19:14:42.226120 kernel: Zone ranges:
Oct 2 19:14:42.226135 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Oct 2 19:14:42.226149 kernel: DMA32 empty
Oct 2 19:14:42.226163 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Oct 2 19:14:42.226184 kernel: Movable zone start for each node
Oct 2 19:14:42.226200 kernel: Early memory node ranges
Oct 2 19:14:42.226214 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff]
Oct 2 19:14:42.226229 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Oct 2 19:14:42.226243 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Oct 2 19:14:42.226257 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Oct 2 19:14:42.226272 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Oct 2 19:14:42.226286 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Oct 2 19:14:42.226301 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Oct 2 19:14:42.226315 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Oct 2 19:14:42.226330 kernel: psci: probing for conduit method from ACPI.
Oct 2 19:14:42.226344 kernel: psci: PSCIv1.0 detected in firmware.
Oct 2 19:14:42.226364 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 2 19:14:42.226379 kernel: psci: Trusted OS migration not required
Oct 2 19:14:42.226400 kernel: psci: SMC Calling Convention v1.1
Oct 2 19:14:42.226416 kernel: ACPI: SRAT not present
Oct 2 19:14:42.226432 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Oct 2 19:14:42.226452 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Oct 2 19:14:42.226468 kernel: pcpu-alloc: [0] 0 [0] 1
Oct 2 19:14:42.226483 kernel: Detected PIPT I-cache on CPU0
Oct 2 19:14:42.226498 kernel: CPU features: detected: GIC system register CPU interface
Oct 2 19:14:42.226513 kernel: CPU features: detected: Spectre-v2
Oct 2 19:14:42.226529 kernel: CPU features: detected: Spectre-v3a
Oct 2 19:14:42.226544 kernel: CPU features: detected: Spectre-BHB
Oct 2 19:14:42.226560 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 2 19:14:42.226575 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 2 19:14:42.226590 kernel: CPU features: detected: ARM erratum 1742098
Oct 2 19:14:42.226606 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Oct 2 19:14:42.226626 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Oct 2 19:14:42.226642 kernel: Policy zone: Normal
Oct 2 19:14:42.226660 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca
Oct 2 19:14:42.226677 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 2 19:14:42.226692 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 2 19:14:42.226708 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 2 19:14:42.226724 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 2 19:14:42.226739 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Oct 2 19:14:42.226755 kernel: Memory: 3826444K/4030464K available (9792K kernel code, 2092K rwdata, 7548K rodata, 34560K init, 779K bss, 204020K reserved, 0K cma-reserved)
Oct 2 19:14:42.226771 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 2 19:14:42.226791 kernel: trace event string verifier disabled
Oct 2 19:14:42.226807 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 2 19:14:42.226846 kernel: rcu: RCU event tracing is enabled.
Oct 2 19:14:42.226866 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 2 19:14:42.226883 kernel: Trampoline variant of Tasks RCU enabled.
Oct 2 19:14:42.226898 kernel: Tracing variant of Tasks RCU enabled.
Oct 2 19:14:42.226914 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 2 19:14:42.226929 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 2 19:14:42.226944 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 2 19:14:42.226960 kernel: GICv3: 96 SPIs implemented
Oct 2 19:14:42.226975 kernel: GICv3: 0 Extended SPIs implemented
Oct 2 19:14:42.226990 kernel: GICv3: Distributor has no Range Selector support
Oct 2 19:14:42.227042 kernel: Root IRQ handler: gic_handle_irq
Oct 2 19:14:42.227059 kernel: GICv3: 16 PPIs implemented
Oct 2 19:14:42.227074 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Oct 2 19:14:42.227090 kernel: ACPI: SRAT not present
Oct 2 19:14:42.227105 kernel: ITS [mem 0x10080000-0x1009ffff]
Oct 2 19:14:42.227120 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1)
Oct 2 19:14:42.227136 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1)
Oct 2 19:14:42.227152 kernel: GICv3: using LPI property table @0x00000004000c0000
Oct 2 19:14:42.227167 kernel: ITS: Using hypervisor restricted LPI range [128]
Oct 2 19:14:42.227182 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Oct 2 19:14:42.227198 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Oct 2 19:14:42.227219 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Oct 2 19:14:42.227235 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Oct 2 19:14:42.227251 kernel: Console: colour dummy device 80x25
Oct 2 19:14:42.227267 kernel: printk: console [tty1] enabled
Oct 2 19:14:42.227283 kernel: ACPI: Core revision 20210730
Oct 2 19:14:42.227298 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Oct 2 19:14:42.227314 kernel: pid_max: default: 32768 minimum: 301
Oct 2 19:14:42.227330 kernel: LSM: Security Framework initializing
Oct 2 19:14:42.227345 kernel: SELinux: Initializing.
Oct 2 19:14:42.227361 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 2 19:14:42.227382 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 2 19:14:42.227397 kernel: rcu: Hierarchical SRCU implementation.
Oct 2 19:14:42.227413 kernel: Platform MSI: ITS@0x10080000 domain created
Oct 2 19:14:42.227428 kernel: PCI/MSI: ITS@0x10080000 domain created
Oct 2 19:14:42.227444 kernel: Remapping and enabling EFI services.
Oct 2 19:14:42.227459 kernel: smp: Bringing up secondary CPUs ...
Oct 2 19:14:42.227475 kernel: Detected PIPT I-cache on CPU1
Oct 2 19:14:42.227491 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Oct 2 19:14:42.227506 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Oct 2 19:14:42.227526 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Oct 2 19:14:42.227542 kernel: smp: Brought up 1 node, 2 CPUs
Oct 2 19:14:42.227557 kernel: SMP: Total of 2 processors activated.
Oct 2 19:14:42.227573 kernel: CPU features: detected: 32-bit EL0 Support
Oct 2 19:14:42.227589 kernel: CPU features: detected: 32-bit EL1 Support
Oct 2 19:14:42.227604 kernel: CPU features: detected: CRC32 instructions
Oct 2 19:14:42.227620 kernel: CPU: All CPU(s) started at EL1
Oct 2 19:14:42.227635 kernel: alternatives: patching kernel code
Oct 2 19:14:42.227651 kernel: devtmpfs: initialized
Oct 2 19:14:42.227671 kernel: KASLR disabled due to lack of seed
Oct 2 19:14:42.227688 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 2 19:14:42.227704 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 2 19:14:42.227730 kernel: pinctrl core: initialized pinctrl subsystem
Oct 2 19:14:42.227750 kernel: SMBIOS 3.0.0 present.
Oct 2 19:14:42.227766 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Oct 2 19:14:42.227782 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 2 19:14:42.227799 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 2 19:14:42.227815 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 2 19:14:42.227831 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 2 19:14:42.227847 kernel: audit: initializing netlink subsys (disabled)
Oct 2 19:14:42.227863 kernel: audit: type=2000 audit(0.252:1): state=initialized audit_enabled=0 res=1
Oct 2 19:14:42.227883 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 2 19:14:42.227899 kernel: cpuidle: using governor menu
Oct 2 19:14:42.227916 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 2 19:14:42.227932 kernel: ASID allocator initialised with 32768 entries
Oct 2 19:14:42.227948 kernel: ACPI: bus type PCI registered
Oct 2 19:14:42.227969 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 2 19:14:42.227985 kernel: Serial: AMBA PL011 UART driver
Oct 2 19:14:42.228025 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Oct 2 19:14:42.228052 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Oct 2 19:14:42.228068 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Oct 2 19:14:42.228085 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Oct 2 19:14:42.228101 kernel: cryptd: max_cpu_qlen set to 1000
Oct 2 19:14:42.228117 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 2 19:14:42.228133 kernel: ACPI: Added _OSI(Module Device)
Oct 2 19:14:42.228156 kernel: ACPI: Added _OSI(Processor Device)
Oct 2 19:14:42.228172 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 2 19:14:42.228188 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 2 19:14:42.228204 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Oct 2 19:14:42.228220 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Oct 2 19:14:42.228236 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Oct 2 19:14:42.228252 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 2 19:14:42.228268 kernel: ACPI: Interpreter enabled
Oct 2 19:14:42.228284 kernel: ACPI: Using GIC for interrupt routing
Oct 2 19:14:42.228305 kernel: ACPI: MCFG table detected, 1 entries
Oct 2 19:14:42.228321 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Oct 2 19:14:42.228744 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 2 19:14:42.240263 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 2 19:14:42.240491 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 2 19:14:42.240689 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Oct 2 19:14:42.240886 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Oct 2 19:14:42.240921 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Oct 2 19:14:42.240939 kernel: acpiphp: Slot [1] registered
Oct 2 19:14:42.240956 kernel: acpiphp: Slot [2] registered
Oct 2 19:14:42.240972 kernel: acpiphp: Slot [3] registered
Oct 2 19:14:42.240988 kernel: acpiphp: Slot [4] registered
Oct 2 19:14:42.241030 kernel: acpiphp: Slot [5] registered
Oct 2 19:14:42.242139 kernel: acpiphp: Slot [6] registered
Oct 2 19:14:42.242157 kernel: acpiphp: Slot [7] registered
Oct 2 19:14:42.242173 kernel: acpiphp: Slot [8] registered
Oct 2 19:14:42.242197 kernel: acpiphp: Slot [9] registered
Oct 2 19:14:42.242213 kernel: acpiphp: Slot [10] registered
Oct 2 19:14:42.242230 kernel: acpiphp: Slot [11] registered
Oct 2 19:14:42.242245 kernel: acpiphp: Slot [12] registered
Oct 2 19:14:42.242261 kernel: acpiphp: Slot [13] registered
Oct 2 19:14:42.242277 kernel: acpiphp: Slot [14] registered
Oct 2 19:14:42.242293 kernel: acpiphp: Slot [15] registered
Oct 2 19:14:42.242310 kernel: acpiphp: Slot [16] registered
Oct 2 19:14:42.242325 kernel: acpiphp: Slot [17] registered
Oct 2 19:14:42.242341 kernel: acpiphp: Slot [18] registered
Oct 2 19:14:42.242362 kernel: acpiphp: Slot [19] registered
Oct 2 19:14:42.242378 kernel: acpiphp: Slot [20] registered
Oct 2 19:14:42.242394 kernel: acpiphp: Slot [21] registered
Oct 2 19:14:42.242410 kernel: acpiphp: Slot [22] registered
Oct 2 19:14:42.242426 kernel: acpiphp: Slot [23] registered
Oct 2 19:14:42.242442 kernel: acpiphp: Slot [24] registered
Oct 2 19:14:42.242458 kernel: acpiphp: Slot [25] registered
Oct 2 19:14:42.242474 kernel: acpiphp: Slot [26] registered
Oct 2 19:14:42.242490 kernel: acpiphp: Slot [27] registered
Oct 2 19:14:42.242510 kernel: acpiphp: Slot [28] registered
Oct 2 19:14:42.242526 kernel: acpiphp: Slot [29] registered
Oct 2 19:14:42.242542 kernel: acpiphp: Slot [30] registered
Oct 2 19:14:42.242558 kernel: acpiphp: Slot [31] registered
Oct 2 19:14:42.242574 kernel: PCI host bridge to bus 0000:00
Oct 2 19:14:42.242815 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Oct 2 19:14:42.245265 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 2 19:14:42.245528 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Oct 2 19:14:42.245750 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Oct 2 19:14:42.246051 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Oct 2 19:14:42.246318 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Oct 2 19:14:42.246554 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Oct 2 19:14:42.246810 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Oct 2 19:14:42.254367 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Oct 2 19:14:42.254619 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Oct 2 19:14:42.254862 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Oct 2 19:14:42.255124 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Oct 2 19:14:42.255341 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Oct 2 19:14:42.255550 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Oct 2 19:14:42.255755 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Oct 2 19:14:42.255965 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Oct 2 19:14:42.256254 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Oct 2 19:14:42.256478 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Oct 2 19:14:42.256698 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Oct 2 19:14:42.256912 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Oct 2 19:14:42.257193 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Oct 2 19:14:42.257418 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 2 19:14:42.257608 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Oct 2 19:14:42.257641 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 2 19:14:42.257659 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 2 19:14:42.257677 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 2 19:14:42.257694 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 2 19:14:42.257710 kernel: iommu: Default domain type: Translated
Oct 2 19:14:42.257727 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 2 19:14:42.257743 kernel: vgaarb: loaded
Oct 2 19:14:42.257759 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 2 19:14:42.257776 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Oct 2 19:14:42.257796 kernel: PTP clock support registered
Oct 2 19:14:42.257813 kernel: Registered efivars operations
Oct 2 19:14:42.257829 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 2 19:14:42.257845 kernel: VFS: Disk quotas dquot_6.6.0
Oct 2 19:14:42.257862 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 2 19:14:42.257878 kernel: pnp: PnP ACPI init
Oct 2 19:14:42.258145 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Oct 2 19:14:42.258180 kernel: pnp: PnP ACPI: found 1 devices
Oct 2 19:14:42.258197 kernel: NET: Registered PF_INET protocol family
Oct 2 19:14:42.258221 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 2 19:14:42.258239 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 2 19:14:42.258255 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 2 19:14:42.258272 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 2 19:14:42.258288 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Oct 2 19:14:42.258305 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 2 19:14:42.258322 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 2 19:14:42.258338 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 2 19:14:42.258355 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 2 19:14:42.258376 kernel: PCI: CLS 0 bytes, default 64
Oct 2 19:14:42.258392 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Oct 2 19:14:42.258409 kernel: kvm [1]: HYP mode not available
Oct 2 19:14:42.258425 kernel: Initialise system trusted keyrings
Oct 2 19:14:42.258442 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 2 19:14:42.258459 kernel: Key type asymmetric registered
Oct 2 19:14:42.258475 kernel: Asymmetric key parser 'x509' registered
Oct 2 19:14:42.258491 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 2 19:14:42.258508 kernel: io scheduler mq-deadline registered
Oct 2 19:14:42.258528 kernel: io scheduler kyber registered
Oct 2 19:14:42.258544 kernel: io scheduler bfq registered
Oct 2 19:14:42.258761 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Oct 2 19:14:42.258789 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 2 19:14:42.258806 kernel: ACPI: button: Power Button [PWRB]
Oct 2 19:14:42.258841 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 2 19:14:42.258861 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Oct 2 19:14:42.259316 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Oct 2 19:14:42.259357 kernel: printk: console [ttyS0] disabled
Oct 2 19:14:42.259374 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Oct 2 19:14:42.259391 kernel: printk: console [ttyS0] enabled
Oct 2 19:14:42.259407 kernel: printk: bootconsole [uart0] disabled
Oct 2 19:14:42.259423 kernel: thunder_xcv, ver 1.0
Oct 2 19:14:42.259439 kernel: thunder_bgx, ver 1.0
Oct 2 19:14:42.259455 kernel: nicpf, ver 1.0
Oct 2 19:14:42.259471 kernel: nicvf, ver 1.0
Oct 2 19:14:42.259703 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 2 19:14:42.263291 kernel: rtc-efi rtc-efi.0: setting system clock to 2023-10-02T19:14:41 UTC (1696274081)
Oct 2 19:14:42.263337 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 2 19:14:42.263355 kernel: NET: Registered PF_INET6 protocol family
Oct 2 19:14:42.263372 kernel: Segment Routing with IPv6
Oct 2 19:14:42.263389 kernel: In-situ OAM (IOAM) with IPv6
Oct 2 19:14:42.263406 kernel: NET: Registered PF_PACKET protocol family
Oct 2 19:14:42.263423 kernel: Key type dns_resolver registered
Oct 2 19:14:42.263439 kernel: registered taskstats version 1
Oct 2 19:14:42.263464 kernel: Loading compiled-in X.509 certificates
Oct 2 19:14:42.263481 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 3a2a38edc68cb70dc60ec0223a6460557b3bb28d'
Oct 2 19:14:42.263497 kernel: Key type .fscrypt registered
Oct 2 19:14:42.263513 kernel: Key type fscrypt-provisioning registered
Oct 2 19:14:42.263529 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 2 19:14:42.263545 kernel: ima: Allocated hash algorithm: sha1
Oct 2 19:14:42.263561 kernel: ima: No architecture policies found
Oct 2 19:14:42.263578 kernel: Freeing unused kernel memory: 34560K
Oct 2 19:14:42.263594 kernel: Run /init as init process
Oct 2 19:14:42.263614 kernel: with arguments:
Oct 2 19:14:42.263630 kernel: /init
Oct 2 19:14:42.263646 kernel: with environment:
Oct 2 19:14:42.263662 kernel: HOME=/
Oct 2 19:14:42.263678 kernel: TERM=linux
Oct 2 19:14:42.263694 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 2 19:14:42.263715 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 2 19:14:42.263736 systemd[1]: Detected virtualization amazon.
Oct 2 19:14:42.263759 systemd[1]: Detected architecture arm64.
Oct 2 19:14:42.263776 systemd[1]: Running in initrd.
Oct 2 19:14:42.263794 systemd[1]: No hostname configured, using default hostname.
Oct 2 19:14:42.263811 systemd[1]: Hostname set to .
Oct 2 19:14:42.263829 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:14:42.263847 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:14:42.263865 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:14:42.263882 systemd[1]: Reached target cryptsetup.target. Oct 2 19:14:42.263904 systemd[1]: Reached target paths.target. Oct 2 19:14:42.263921 systemd[1]: Reached target slices.target. Oct 2 19:14:42.263939 systemd[1]: Reached target swap.target. Oct 2 19:14:42.263956 systemd[1]: Reached target timers.target. Oct 2 19:14:42.263975 systemd[1]: Listening on iscsid.socket. Oct 2 19:14:42.264061 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:14:42.264085 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:14:42.264104 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:14:42.264128 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:14:42.264147 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:14:42.264164 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:14:42.264182 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:14:42.264200 systemd[1]: Reached target sockets.target. Oct 2 19:14:42.264219 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:14:42.264237 systemd[1]: Finished network-cleanup.service. Oct 2 19:14:42.264255 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:14:42.264272 systemd[1]: Starting systemd-journald.service... Oct 2 19:14:42.264294 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:14:42.264312 systemd[1]: Starting systemd-resolved.service... Oct 2 19:14:42.264330 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:14:42.264347 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:14:42.264365 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:14:42.264383 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:14:42.264401 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:14:42.264419 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:14:42.264437 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:14:42.264460 kernel: audit: type=1130 audit(1696274082.229:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:42.264477 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:14:42.264499 systemd-journald[309]: Journal started Oct 2 19:14:42.264592 systemd-journald[309]: Runtime Journal (/run/log/journal/ec2c990c539a33b43371daee8287f59c) is 8.0M, max 75.4M, 67.4M free. Oct 2 19:14:42.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:42.179061 systemd-modules-load[310]: Inserted module 'overlay' Oct 2 19:14:42.269448 systemd[1]: Started systemd-journald.service. Oct 2 19:14:42.269488 kernel: Bridge firewalling registered Oct 2 19:14:42.249446 systemd-resolved[311]: Positive Trust Anchors: Oct 2 19:14:42.249471 systemd-resolved[311]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:14:42.249525 systemd-resolved[311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:14:42.273045 systemd-modules-load[310]: Inserted module 'br_netfilter' Oct 2 19:14:42.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:42.309047 kernel: audit: type=1130 audit(1696274082.298:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:42.332052 kernel: SCSI subsystem initialized Oct 2 19:14:42.339652 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:14:42.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:42.351917 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:14:42.362049 kernel: audit: type=1130 audit(1696274082.340:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:42.370997 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:14:42.371086 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:14:42.375296 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:14:42.384512 systemd-modules-load[310]: Inserted module 'dm_multipath' Oct 2 19:14:42.388778 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:14:42.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:42.392480 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:14:42.408037 kernel: audit: type=1130 audit(1696274082.390:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:42.425043 dracut-cmdline[326]: dracut-dracut-053 Oct 2 19:14:42.438365 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:14:42.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:14:42.449184 dracut-cmdline[326]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 19:14:42.469244 kernel: audit: type=1130 audit(1696274082.440:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:42.707044 kernel: Loading iSCSI transport class v2.0-870. Oct 2 19:14:42.721045 kernel: iscsi: registered transport (tcp) Oct 2 19:14:42.748427 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:14:42.748500 kernel: QLogic iSCSI HBA Driver Oct 2 19:14:42.896038 kernel: random: crng init done Oct 2 19:14:42.896225 systemd-resolved[311]: Defaulting to hostname 'linux'. Oct 2 19:14:42.900614 systemd[1]: Started systemd-resolved.service. Oct 2 19:14:42.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:42.905793 systemd[1]: Reached target nss-lookup.target. Oct 2 19:14:42.918039 kernel: audit: type=1130 audit(1696274082.904:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:42.965791 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:14:42.977467 kernel: audit: type=1130 audit(1696274082.966:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:42.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:42.971717 systemd[1]: Starting dracut-pre-udev.service... Oct 2 19:14:43.070064 kernel: raid6: neonx8 gen() 6399 MB/s Oct 2 19:14:43.088056 kernel: raid6: neonx8 xor() 4639 MB/s Oct 2 19:14:43.106053 kernel: raid6: neonx4 gen() 6581 MB/s Oct 2 19:14:43.124038 kernel: raid6: neonx4 xor() 4829 MB/s Oct 2 19:14:43.142038 kernel: raid6: neonx2 gen() 5833 MB/s Oct 2 19:14:43.160040 kernel: raid6: neonx2 xor() 4445 MB/s Oct 2 19:14:43.178036 kernel: raid6: neonx1 gen() 4513 MB/s Oct 2 19:14:43.196039 kernel: raid6: neonx1 xor() 3656 MB/s Oct 2 19:14:43.214037 kernel: raid6: int64x8 gen() 3436 MB/s Oct 2 19:14:43.232038 kernel: raid6: int64x8 xor() 2092 MB/s Oct 2 19:14:43.250037 kernel: raid6: int64x4 gen() 3845 MB/s Oct 2 19:14:43.268038 kernel: raid6: int64x4 xor() 2201 MB/s Oct 2 19:14:43.286037 kernel: raid6: int64x2 gen() 3616 MB/s Oct 2 19:14:43.304038 kernel: raid6: int64x2 xor() 1954 MB/s Oct 2 19:14:43.322039 kernel: raid6: int64x1 gen() 2774 MB/s Oct 2 19:14:43.341567 kernel: raid6: int64x1 xor() 1455 MB/s Oct 2 19:14:43.341598 kernel: raid6: using algorithm neonx4 gen() 6581 MB/s Oct 2 19:14:43.341632 kernel: raid6: .... 
xor() 4829 MB/s, rmw enabled Oct 2 19:14:43.343390 kernel: raid6: using neon recovery algorithm Oct 2 19:14:43.362049 kernel: xor: measuring software checksum speed Oct 2 19:14:43.365035 kernel: 8regs : 9339 MB/sec Oct 2 19:14:43.367037 kernel: 32regs : 11107 MB/sec Oct 2 19:14:43.371348 kernel: arm64_neon : 9568 MB/sec Oct 2 19:14:43.371383 kernel: xor: using function: 32regs (11107 MB/sec) Oct 2 19:14:43.462057 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Oct 2 19:14:43.500653 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:14:43.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:43.510000 audit: BPF prog-id=7 op=LOAD Oct 2 19:14:43.514227 kernel: audit: type=1130 audit(1696274083.501:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:43.514284 kernel: audit: type=1334 audit(1696274083.510:10): prog-id=7 op=LOAD Oct 2 19:14:43.512470 systemd[1]: Starting systemd-udevd.service... Oct 2 19:14:43.510000 audit: BPF prog-id=8 op=LOAD Oct 2 19:14:43.549746 systemd-udevd[508]: Using default interface naming scheme 'v252'. Oct 2 19:14:43.560838 systemd[1]: Started systemd-udevd.service. Oct 2 19:14:43.568238 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:14:43.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:43.625677 dracut-pre-trigger[512]: rd.md=0: removing MD RAID activation Oct 2 19:14:43.744094 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:14:43.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:43.748792 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:14:43.872493 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:14:43.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:44.031464 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 2 19:14:44.031529 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Oct 2 19:14:44.046724 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Oct 2 19:14:44.046801 kernel: ena 0000:00:05.0: ENA device version: 0.10 Oct 2 19:14:44.047108 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Oct 2 19:14:44.049218 kernel: nvme nvme0: pci function 0000:00:04.0 Oct 2 19:14:44.058031 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:18:c1:76:be:2d Oct 2 19:14:44.062034 kernel: nvme nvme0: 2/0/0 default/read/poll queues Oct 2 19:14:44.068562 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 2 19:14:44.068613 kernel: GPT:9289727 != 16777215 Oct 2 19:14:44.068636 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 2 19:14:44.070769 kernel: GPT:9289727 != 16777215 Oct 2 19:14:44.072093 kernel: GPT: Use GNU Parted to correct GPT errors. 
Oct 2 19:14:44.074014 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:14:44.079526 (udev-worker)[566]: Network interface NamePolicy= disabled on kernel command line. Oct 2 19:14:44.157047 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (573) Oct 2 19:14:44.264384 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:14:44.314970 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:14:44.380074 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:14:44.384467 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:14:44.399280 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:14:44.413092 systemd[1]: Starting disk-uuid.service... Oct 2 19:14:44.440047 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:14:44.443182 disk-uuid[670]: Primary Header is updated. Oct 2 19:14:44.443182 disk-uuid[670]: Secondary Entries is updated. Oct 2 19:14:44.443182 disk-uuid[670]: Secondary Header is updated. Oct 2 19:14:44.460040 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:14:45.473047 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:14:45.473224 disk-uuid[671]: The operation has completed successfully. Oct 2 19:14:45.766893 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:14:45.779816 kernel: kauditd_printk_skb: 4 callbacks suppressed Oct 2 19:14:45.779866 kernel: audit: type=1130 audit(1696274085.769:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.767113 systemd[1]: Finished disk-uuid.service. Oct 2 19:14:45.790035 kernel: audit: type=1131 audit(1696274085.769:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.772378 systemd[1]: Starting verity-setup.service... Oct 2 19:14:45.826063 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 2 19:14:45.934911 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:14:45.939833 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:14:45.948025 systemd[1]: Finished verity-setup.service. Oct 2 19:14:45.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.959100 kernel: audit: type=1130 audit(1696274085.947:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.045058 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:14:46.046472 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:14:46.048363 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Oct 2 19:14:46.049823 systemd[1]: Starting ignition-setup.service... Oct 2 19:14:46.053475 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 19:14:46.102943 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:14:46.103054 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:14:46.105225 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:14:46.124046 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:14:46.158834 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:14:46.192830 systemd[1]: Finished ignition-setup.service. Oct 2 19:14:46.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.197797 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:14:46.210043 kernel: audit: type=1130 audit(1696274086.195:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.426790 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:14:46.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.436000 audit: BPF prog-id=9 op=LOAD Oct 2 19:14:46.440518 kernel: audit: type=1130 audit(1696274086.429:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.440573 kernel: audit: type=1334 audit(1696274086.436:20): prog-id=9 op=LOAD Oct 2 19:14:46.438873 systemd[1]: Starting systemd-networkd.service... Oct 2 19:14:46.496567 systemd-networkd[1194]: lo: Link UP Oct 2 19:14:46.496592 systemd-networkd[1194]: lo: Gained carrier Oct 2 19:14:46.500345 systemd-networkd[1194]: Enumeration completed Oct 2 19:14:46.502293 systemd-networkd[1194]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:14:46.503549 systemd[1]: Started systemd-networkd.service. Oct 2 19:14:46.521815 kernel: audit: type=1130 audit(1696274086.507:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.509132 systemd[1]: Reached target network.target. Oct 2 19:14:46.521264 systemd-networkd[1194]: eth0: Link UP Oct 2 19:14:46.521272 systemd-networkd[1194]: eth0: Gained carrier Oct 2 19:14:46.523456 systemd[1]: Starting iscsiuio.service... Oct 2 19:14:46.542717 systemd[1]: Started iscsiuio.service. Oct 2 19:14:46.547144 systemd[1]: Starting iscsid.service... Oct 2 19:14:46.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:14:46.560044 kernel: audit: type=1130 audit(1696274086.543:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.563806 iscsid[1199]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:14:46.563806 iscsid[1199]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 2 19:14:46.563806 iscsid[1199]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:14:46.563806 iscsid[1199]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 19:14:46.563806 iscsid[1199]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:14:46.563806 iscsid[1199]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:14:46.607577 kernel: audit: type=1130 audit(1696274086.586:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.563286 systemd-networkd[1194]: eth0: DHCPv4 address 172.31.22.12/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 2 19:14:46.584073 systemd[1]: Started iscsid.service. Oct 2 19:14:46.601112 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:14:46.638682 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:14:46.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.655761 kernel: audit: type=1130 audit(1696274086.641:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.642764 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:14:46.656061 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:14:46.660888 systemd[1]: Reached target remote-fs.target. Oct 2 19:14:46.666687 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:14:46.703093 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:14:46.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:14:46.904803 ignition[1116]: Ignition 2.14.0 Oct 2 19:14:46.904830 ignition[1116]: Stage: fetch-offline Oct 2 19:14:46.905283 ignition[1116]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:14:46.907904 ignition[1116]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:14:46.926824 ignition[1116]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:14:46.929615 ignition[1116]: Ignition finished successfully Oct 2 19:14:46.932676 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:14:46.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.937180 systemd[1]: Starting ignition-fetch.service... Oct 2 19:14:46.967375 ignition[1218]: Ignition 2.14.0 Oct 2 19:14:46.967404 ignition[1218]: Stage: fetch Oct 2 19:14:46.967773 ignition[1218]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:14:46.967837 ignition[1218]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:14:46.987095 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:14:46.989557 ignition[1218]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:14:46.998589 ignition[1218]: INFO : PUT result: OK Oct 2 19:14:47.001802 ignition[1218]: DEBUG : parsed url from cmdline: "" Oct 2 19:14:47.001802 ignition[1218]: INFO : no config URL provided Oct 2 19:14:47.001802 ignition[1218]: INFO : reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:14:47.007955 ignition[1218]: INFO : no config at "/usr/lib/ignition/user.ign" Oct 2 19:14:47.007955 ignition[1218]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:14:47.007955 ignition[1218]: INFO : PUT result: OK Oct 2 19:14:47.013833 ignition[1218]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Oct 2 19:14:47.016883 ignition[1218]: INFO : GET result: OK Oct 2 19:14:47.018519 ignition[1218]: DEBUG : parsing config with SHA512: 005e4c3a8f17b39ed7a1a85c0c1381420386c04e50db3ce2965ca0680b1b032c6248598c6d150fda73f6e3841e49d33ae01160e53cfc11c0a407b5cdaa912b08 Oct 2 19:14:47.046700 unknown[1218]: fetched base config from "system" Oct 2 19:14:47.046964 unknown[1218]: fetched base config from "system" Oct 2 19:14:47.048046 ignition[1218]: fetch: fetch complete Oct 2 19:14:47.046980 unknown[1218]: fetched user config from "aws" Oct 2 19:14:47.048060 ignition[1218]: fetch: fetch passed Oct 2 19:14:47.048141 ignition[1218]: Ignition finished successfully Oct 2 19:14:47.058707 systemd[1]: Finished ignition-fetch.service. Oct 2 19:14:47.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:47.063247 systemd[1]: Starting ignition-kargs.service... 
Oct 2 19:14:47.096142 ignition[1224]: Ignition 2.14.0 Oct 2 19:14:47.096171 ignition[1224]: Stage: kargs Oct 2 19:14:47.096520 ignition[1224]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:14:47.096577 ignition[1224]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:14:47.112495 ignition[1224]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:14:47.114865 ignition[1224]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:14:47.118496 ignition[1224]: INFO : PUT result: OK Oct 2 19:14:47.123092 ignition[1224]: kargs: kargs passed Oct 2 19:14:47.123218 ignition[1224]: Ignition finished successfully Oct 2 19:14:47.126979 systemd[1]: Finished ignition-kargs.service. Oct 2 19:14:47.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:47.131577 systemd[1]: Starting ignition-disks.service... Oct 2 19:14:47.161361 ignition[1230]: Ignition 2.14.0 Oct 2 19:14:47.161391 ignition[1230]: Stage: disks Oct 2 19:14:47.161746 ignition[1230]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:14:47.161803 ignition[1230]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:14:47.178987 ignition[1230]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:14:47.181978 ignition[1230]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:14:47.185079 ignition[1230]: INFO : PUT result: OK Oct 2 19:14:47.189892 ignition[1230]: disks: disks passed Oct 2 19:14:47.190033 ignition[1230]: Ignition finished successfully Oct 2 19:14:47.193525 systemd[1]: Finished ignition-disks.service. Oct 2 19:14:47.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:47.196576 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:14:47.198661 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:14:47.201821 systemd[1]: Reached target local-fs.target. Oct 2 19:14:47.203407 systemd[1]: Reached target sysinit.target. Oct 2 19:14:47.204998 systemd[1]: Reached target basic.target. Oct 2 19:14:47.207961 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:14:47.260132 systemd-fsck[1238]: ROOT: clean, 603/553520 files, 56011/553472 blocks Oct 2 19:14:47.270559 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:14:47.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:47.273565 systemd[1]: Mounting sysroot.mount... Oct 2 19:14:47.307068 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:14:47.309227 systemd[1]: Mounted sysroot.mount. Oct 2 19:14:47.310872 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:14:47.328587 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:14:47.332104 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. 
Oct 2 19:14:47.332210 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:14:47.332274 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:14:47.359629 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:14:47.372315 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:14:47.374338 systemd[1]: Starting initrd-setup-root.service... Oct 2 19:14:47.413610 initrd-setup-root[1260]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:14:47.421413 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1255) Oct 2 19:14:47.421447 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:14:47.423941 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:14:47.426966 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:14:47.438043 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:14:47.442182 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:14:47.453900 initrd-setup-root[1286]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:14:47.471404 initrd-setup-root[1294]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:14:47.490963 initrd-setup-root[1302]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:14:47.714495 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:14:47.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:47.718377 systemd[1]: Starting ignition-mount.service... Oct 2 19:14:47.728729 systemd[1]: Starting sysroot-boot.service... Oct 2 19:14:47.753701 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Oct 2 19:14:47.753873 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Oct 2 19:14:47.783861 systemd[1]: Finished sysroot-boot.service. Oct 2 19:14:47.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:47.806314 ignition[1322]: INFO : Ignition 2.14.0 Oct 2 19:14:47.806314 ignition[1322]: INFO : Stage: mount Oct 2 19:14:47.809765 ignition[1322]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:14:47.809765 ignition[1322]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:14:47.827640 ignition[1322]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:14:47.830884 ignition[1322]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:14:47.834352 ignition[1322]: INFO : PUT result: OK Oct 2 19:14:47.839318 ignition[1322]: INFO : mount: mount passed Oct 2 19:14:47.841656 ignition[1322]: INFO : Ignition finished successfully Oct 2 19:14:47.844574 systemd[1]: Finished ignition-mount.service. Oct 2 19:14:47.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:47.847427 systemd[1]: Starting ignition-files.service... 
Oct 2 19:14:47.859055 systemd-networkd[1194]: eth0: Gained IPv6LL Oct 2 19:14:47.872861 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:14:47.896077 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1330) Oct 2 19:14:47.902139 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:14:47.902209 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:14:47.902233 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:14:47.911027 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:14:47.916155 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:14:47.949851 ignition[1349]: INFO : Ignition 2.14.0 Oct 2 19:14:47.949851 ignition[1349]: INFO : Stage: files Oct 2 19:14:47.953385 ignition[1349]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:14:47.953385 ignition[1349]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:14:47.968618 ignition[1349]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:14:47.971162 ignition[1349]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:14:47.974783 ignition[1349]: INFO : PUT result: OK Oct 2 19:14:47.979949 ignition[1349]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:14:47.984757 ignition[1349]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:14:47.984757 ignition[1349]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:14:48.032575 ignition[1349]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:14:48.035854 ignition[1349]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:14:48.039763 unknown[1349]: wrote ssh authorized keys file for user: core Oct 2 19:14:48.042070 ignition[1349]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:14:48.045476 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Oct 2 19:14:48.049411 ignition[1349]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Oct 2 19:14:48.403282 ignition[1349]: INFO : GET result: OK Oct 2 19:14:48.892564 ignition[1349]: DEBUG : file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Oct 2 19:14:48.897354 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Oct 2 19:14:48.897354 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Oct 2 19:14:48.897354 ignition[1349]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Oct 2 19:14:48.995835 ignition[1349]: INFO : GET result: OK Oct 2 19:14:49.290240 ignition[1349]: DEBUG : file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Oct 2 19:14:49.295396 ignition[1349]: INFO 
: files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Oct 2 19:14:49.295396 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Oct 2 19:14:49.295396 ignition[1349]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:14:49.315442 ignition[1349]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem907682946" Oct 2 19:14:49.319782 ignition[1349]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem907682946": device or resource busy Oct 2 19:14:49.327502 ignition[1349]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem907682946", trying btrfs: device or resource busy Oct 2 19:14:49.327502 ignition[1349]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem907682946" Oct 2 19:14:49.334174 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1349) Oct 2 19:14:49.334411 ignition[1349]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem907682946" Oct 2 19:14:49.352058 ignition[1349]: INFO : op(3): [started] unmounting "/mnt/oem907682946" Oct 2 19:14:49.354619 ignition[1349]: INFO : op(3): [finished] unmounting "/mnt/oem907682946" Oct 2 19:14:49.356967 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Oct 2 19:14:49.360607 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:14:49.360607 ignition[1349]: INFO : GET https://storage.googleapis.com/kubernetes-release/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Oct 2 19:14:49.369422 systemd[1]: mnt-oem907682946.mount: Deactivated successfully. 
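Aside (not part of the captured log): the op(1)/op(2) lines above show the files stage mounting the OEM partition by label, failing the first attempt as ext4 ("device or resource busy") and succeeding on the btrfs retry before writing /etc/eks/bootstrap.sh. A rough Python sketch of that try-one-filesystem-then-fall-back pattern, under the assumption that plain mount(8)/umount(8) calls stand in for Ignition's internal logic:

```python
# Sketch of the ext4-then-btrfs mount fallback visible in the log above.
# Device path and temp-dir prefix mirror the log; the helper itself is only
# illustrative and is not Ignition's actual implementation. Requires root.
import subprocess
import tempfile

def mount_oem(device: str = "/dev/disk/by-label/OEM") -> str:
    """Mount the OEM partition, trying ext4 then btrfs; return the mountpoint."""
    mountpoint = tempfile.mkdtemp(prefix="oem")
    for fstype in ("ext4", "btrfs"):
        result = subprocess.run(
            ["mount", "-t", fstype, device, mountpoint],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return mountpoint
    raise RuntimeError(f"could not mount {device}: {result.stderr.strip()}")

# Usage: mp = mount_oem(); write files under mp; subprocess.run(["umount", mp])
```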
Oct 2 19:14:49.451373 ignition[1349]: INFO : GET result: OK Oct 2 19:14:50.507458 ignition[1349]: DEBUG : file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Oct 2 19:14:50.513082 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:14:50.513082 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:14:50.513082 ignition[1349]: INFO : GET https://storage.googleapis.com/kubernetes-release/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Oct 2 19:14:50.569036 ignition[1349]: INFO : GET result: OK Oct 2 19:14:52.621664 ignition[1349]: DEBUG : file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Oct 2 19:14:52.626946 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:14:52.626946 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:14:52.626946 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:14:52.626946 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:14:52.640911 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:14:52.640911 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Oct 2 19:14:52.648341 ignition[1349]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:14:52.666438 ignition[1349]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3240232391" Oct 2 19:14:52.666438 ignition[1349]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3240232391": device or resource busy Oct 2 19:14:52.666438 ignition[1349]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3240232391", trying btrfs: device or resource busy Oct 2 19:14:52.666438 ignition[1349]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3240232391" Oct 2 19:14:52.666438 ignition[1349]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3240232391" Oct 2 19:14:52.691058 ignition[1349]: INFO : op(6): [started] unmounting "/mnt/oem3240232391" Oct 2 19:14:52.691058 ignition[1349]: INFO : op(6): [finished] unmounting "/mnt/oem3240232391" Oct 2 19:14:52.691058 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Oct 2 19:14:52.691058 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Oct 2 19:14:52.691058 ignition[1349]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:14:52.689787 systemd[1]: mnt-oem3240232391.mount: Deactivated successfully. 
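Aside (not part of the captured log): each artifact fetched above (cni-plugins, crictl, kubeadm, kubelet) is followed by a "file matches expected sum of: …" check, i.e. the files stage streams the download and compares its SHA512 digest against the value carried in the Ignition config. A small Python sketch of that download-and-verify step; the URL is one taken from the log, and EXPECTED_SHA512 is a placeholder standing in for the digest the config would supply.

```python
# Sketch of the download-and-verify step the files stage logs above: stream a
# release artifact and compare its SHA512 digest against the expected value.
# The URL is from the log; EXPECTED_SHA512 is a placeholder, not a real digest.
import hashlib
import urllib.request

URL = ("https://storage.googleapis.com/kubernetes-release/release/"
       "v1.26.5/bin/linux/arm64/kubeadm")
EXPECTED_SHA512 = "<digest taken from the Ignition config>"

def sha512_of_url(url: str, chunk_size: int = 1 << 20) -> str:
    """Stream the response body and return its hex SHA512 digest."""
    digest = hashlib.sha512()
    with urllib.request.urlopen(url) as resp:
        while chunk := resp.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha512_of_url(URL)
    if actual != EXPECTED_SHA512:
        raise SystemExit(f"checksum mismatch: got {actual}")
    print("file matches expected sum")
```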
Oct 2 19:14:52.722417 ignition[1349]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1989440126" Oct 2 19:14:52.725444 ignition[1349]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1989440126": device or resource busy Oct 2 19:14:52.725444 ignition[1349]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1989440126", trying btrfs: device or resource busy Oct 2 19:14:52.725444 ignition[1349]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1989440126" Oct 2 19:14:52.735388 ignition[1349]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1989440126" Oct 2 19:14:52.735388 ignition[1349]: INFO : op(9): [started] unmounting "/mnt/oem1989440126" Oct 2 19:14:52.740614 ignition[1349]: INFO : op(9): [finished] unmounting "/mnt/oem1989440126" Oct 2 19:14:52.740614 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Oct 2 19:14:52.740614 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 19:14:52.740614 ignition[1349]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:14:52.769075 ignition[1349]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem788624268" Oct 2 19:14:52.769075 ignition[1349]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem788624268": device or resource busy Oct 2 19:14:52.769075 ignition[1349]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem788624268", trying btrfs: device or resource busy Oct 2 19:14:52.769075 ignition[1349]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem788624268" Oct 2 19:14:52.783251 ignition[1349]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem788624268" Oct 2 19:14:52.783251 ignition[1349]: INFO : op(c): [started] unmounting "/mnt/oem788624268" Oct 2 19:14:52.783251 ignition[1349]: INFO : op(c): [finished] unmounting "/mnt/oem788624268" Oct 2 19:14:52.783251 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 19:14:52.783251 ignition[1349]: INFO : files: op(d): [started] processing unit "amazon-ssm-agent.service" Oct 2 19:14:52.783251 ignition[1349]: INFO : files: op(d): op(e): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Oct 2 19:14:52.783251 ignition[1349]: INFO : files: op(d): op(e): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Oct 2 19:14:52.783251 ignition[1349]: INFO : files: op(d): [finished] processing unit "amazon-ssm-agent.service" Oct 2 19:14:52.783251 ignition[1349]: INFO : files: op(f): [started] processing unit "nvidia.service" Oct 2 19:14:52.783251 ignition[1349]: INFO : files: op(f): [finished] processing unit "nvidia.service" Oct 2 19:14:52.783251 ignition[1349]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:14:52.783251 ignition[1349]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:14:52.783251 ignition[1349]: INFO : files: op(11): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:14:52.783251 ignition[1349]: INFO : files: op(11): op(12): [started] writing unit 
"prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:14:52.783251 ignition[1349]: INFO : files: op(11): op(12): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:14:52.783251 ignition[1349]: INFO : files: op(11): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:14:52.783251 ignition[1349]: INFO : files: op(13): [started] processing unit "prepare-critools.service" Oct 2 19:14:52.783251 ignition[1349]: INFO : files: op(13): op(14): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:14:52.783251 ignition[1349]: INFO : files: op(13): op(14): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:14:52.783251 ignition[1349]: INFO : files: op(13): [finished] processing unit "prepare-critools.service" Oct 2 19:14:52.845445 ignition[1349]: INFO : files: op(15): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:14:52.845445 ignition[1349]: INFO : files: op(15): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:14:52.845445 ignition[1349]: INFO : files: op(16): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:14:52.845445 ignition[1349]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:14:52.845445 ignition[1349]: INFO : files: op(17): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:14:52.845445 ignition[1349]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:14:52.845445 ignition[1349]: INFO : files: op(18): [started] setting preset to enabled for "amazon-ssm-agent.service" Oct 2 19:14:52.845445 ignition[1349]: INFO : files: op(18): [finished] setting preset to enabled for "amazon-ssm-agent.service" Oct 2 19:14:52.845445 ignition[1349]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service" Oct 2 19:14:52.845445 ignition[1349]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service" Oct 2 19:14:52.884680 ignition[1349]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:14:52.884680 ignition[1349]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:14:52.884680 ignition[1349]: INFO : files: files passed Oct 2 19:14:52.884680 ignition[1349]: INFO : Ignition finished successfully Oct 2 19:14:52.898926 systemd[1]: Finished ignition-files.service. Oct 2 19:14:52.911469 kernel: kauditd_printk_skb: 9 callbacks suppressed Oct 2 19:14:52.911551 kernel: audit: type=1130 audit(1696274092.901:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:52.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:52.914542 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:14:52.916600 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Oct 2 19:14:52.920297 systemd[1]: Starting ignition-quench.service... Oct 2 19:14:52.951260 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:14:52.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:52.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:52.951459 systemd[1]: Finished ignition-quench.service. Oct 2 19:14:52.972112 kernel: audit: type=1130 audit(1696274092.953:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:52.972154 kernel: audit: type=1131 audit(1696274092.953:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:52.979328 initrd-setup-root-after-ignition[1374]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:14:52.983786 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:14:53.002584 kernel: audit: type=1130 audit(1696274092.985:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:52.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:52.987048 systemd[1]: Reached target ignition-complete.target. Oct 2 19:14:52.997294 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:14:53.052476 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:14:53.053480 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:14:53.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.059384 systemd[1]: Reached target initrd-fs.target. Oct 2 19:14:53.075612 kernel: audit: type=1130 audit(1696274093.057:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.075659 kernel: audit: type=1131 audit(1696274093.058:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.075693 systemd[1]: Reached target initrd.target. Oct 2 19:14:53.082169 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:14:53.085915 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:14:53.129940 systemd[1]: Finished dracut-pre-pivot.service. 
Oct 2 19:14:53.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.135660 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:14:53.143909 kernel: audit: type=1130 audit(1696274093.132:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.166615 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:14:53.170251 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:14:53.184826 systemd[1]: Stopped target timers.target. Oct 2 19:14:53.188172 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:14:53.190474 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:14:53.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.194200 systemd[1]: Stopped target initrd.target. Oct 2 19:14:53.204218 kernel: audit: type=1131 audit(1696274093.192:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.204465 systemd[1]: Stopped target basic.target. Oct 2 19:14:53.206667 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:14:53.210361 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:14:53.213204 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:14:53.216750 systemd[1]: Stopped target remote-fs.target. Oct 2 19:14:53.219046 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:14:53.221921 systemd[1]: Stopped target sysinit.target. Oct 2 19:14:53.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.224579 systemd[1]: Stopped target local-fs.target. Oct 2 19:14:53.247284 kernel: audit: type=1131 audit(1696274093.234:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.226591 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:14:53.229098 systemd[1]: Stopped target swap.target. Oct 2 19:14:53.262989 kernel: audit: type=1131 audit(1696274093.248:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.231588 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:14:53.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:14:53.231927 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:14:53.244242 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:14:53.248577 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:14:53.248944 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:14:53.258735 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:14:53.259163 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:14:53.262613 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:14:53.263103 systemd[1]: Stopped ignition-files.service. Oct 2 19:14:53.303198 iscsid[1199]: iscsid shutting down. Oct 2 19:14:53.268720 systemd[1]: Stopping ignition-mount.service... Oct 2 19:14:53.303023 systemd[1]: Stopping iscsid.service... Oct 2 19:14:53.310247 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:14:53.312829 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:14:53.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.319393 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:14:53.323164 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:14:53.326112 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:14:53.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.333113 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:14:53.335634 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:14:53.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.343471 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:14:53.345747 systemd[1]: Stopped iscsid.service. Oct 2 19:14:53.347312 ignition[1387]: INFO : Ignition 2.14.0 Oct 2 19:14:53.347312 ignition[1387]: INFO : Stage: umount Oct 2 19:14:53.350861 ignition[1387]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:14:53.350861 ignition[1387]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:14:53.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.371665 systemd[1]: Stopping iscsiuio.service... Oct 2 19:14:53.379777 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:14:53.380432 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:14:53.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.391605 systemd[1]: iscsiuio.service: Deactivated successfully. 
Oct 2 19:14:53.392595 systemd[1]: Stopped iscsiuio.service. Oct 2 19:14:53.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.402321 ignition[1387]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:14:53.405591 ignition[1387]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:14:53.409524 ignition[1387]: INFO : PUT result: OK Oct 2 19:14:53.413483 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:14:53.417169 ignition[1387]: INFO : umount: umount passed Oct 2 19:14:53.419226 ignition[1387]: INFO : Ignition finished successfully Oct 2 19:14:53.424645 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:14:53.425633 systemd[1]: Stopped ignition-mount.service. Oct 2 19:14:53.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.429850 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:14:53.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.429968 systemd[1]: Stopped ignition-disks.service. Oct 2 19:14:53.441000 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:14:53.442500 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:14:53.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.444759 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 2 19:14:53.445167 systemd[1]: Stopped ignition-fetch.service. Oct 2 19:14:53.454908 systemd[1]: Stopped target network.target. Oct 2 19:14:53.457808 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:14:53.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.457950 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:14:53.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.463729 systemd[1]: Stopped target paths.target. Oct 2 19:14:53.463900 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:14:53.468942 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:14:53.472450 systemd[1]: Stopped target slices.target. Oct 2 19:14:53.475473 systemd[1]: Stopped target sockets.target. Oct 2 19:14:53.478561 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:14:53.478670 systemd[1]: Closed iscsid.socket. Oct 2 19:14:53.483228 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:14:53.483356 systemd[1]: Closed iscsiuio.socket. Oct 2 19:14:53.486478 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:14:53.489931 systemd[1]: Stopped ignition-setup.service. 
Oct 2 19:14:53.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.493357 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:14:53.496595 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:14:53.500223 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:14:53.500464 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:14:53.501247 systemd-networkd[1194]: eth0: DHCPv6 lease lost Oct 2 19:14:53.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.509466 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:14:53.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.509685 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:14:53.513149 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:14:53.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.519000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:14:53.513226 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:14:53.515334 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:14:53.515518 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:14:53.527090 systemd[1]: Stopping network-cleanup.service... Oct 2 19:14:53.535666 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:14:53.536665 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:14:53.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.541811 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:14:53.542404 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:14:53.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.547594 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:14:53.548608 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:14:53.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.553784 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:14:53.558382 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:14:53.560493 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:14:53.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.566900 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:14:53.568709 systemd[1]: Stopped systemd-udevd.service. 
Oct 2 19:14:53.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.569000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:14:53.573347 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:14:53.573456 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:14:53.588083 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:14:53.588187 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:14:53.593768 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:14:53.593903 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:14:53.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.599436 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:14:53.599571 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:14:53.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.604717 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:14:53.604858 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:14:53.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.611581 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:14:53.633143 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:14:53.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.633277 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:14:53.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.642742 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:14:53.643515 systemd[1]: Stopped network-cleanup.service. Oct 2 19:14:53.647546 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:14:53.648612 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:14:53.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:53.657021 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:14:53.662280 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:14:53.678713 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:14:53.694537 systemd[1]: Switching root. 
Oct 2 19:14:53.724448 systemd-journald[309]: Journal stopped Oct 2 19:14:59.517566 systemd-journald[309]: Received SIGTERM from PID 1 (systemd). Oct 2 19:14:59.534551 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:14:59.535379 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 19:14:59.535564 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:14:59.535598 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:14:59.535630 kernel: SELinux: policy capability open_perms=1 Oct 2 19:14:59.535661 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:14:59.535692 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:14:59.535722 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:14:59.540074 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:14:59.540139 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:14:59.540169 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:14:59.540214 systemd[1]: Successfully loaded SELinux policy in 85.214ms. Oct 2 19:14:59.540496 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.882ms. Oct 2 19:14:59.540535 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:14:59.540570 systemd[1]: Detected virtualization amazon. Oct 2 19:14:59.540604 systemd[1]: Detected architecture arm64. Oct 2 19:14:59.540636 systemd[1]: Detected first boot. Oct 2 19:14:59.540668 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:14:59.540702 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:14:59.540733 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:14:59.540770 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:14:59.540805 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:14:59.540900 kernel: kauditd_printk_skb: 39 callbacks suppressed Oct 2 19:14:59.540932 kernel: audit: type=1334 audit(1696274099.001:83): prog-id=12 op=LOAD Oct 2 19:14:59.540960 kernel: audit: type=1334 audit(1696274099.001:84): prog-id=3 op=UNLOAD Oct 2 19:14:59.540990 kernel: audit: type=1334 audit(1696274099.003:85): prog-id=13 op=LOAD Oct 2 19:14:59.544083 kernel: audit: type=1334 audit(1696274099.006:86): prog-id=14 op=LOAD Oct 2 19:14:59.544123 kernel: audit: type=1334 audit(1696274099.006:87): prog-id=4 op=UNLOAD Oct 2 19:14:59.544153 kernel: audit: type=1334 audit(1696274099.006:88): prog-id=5 op=UNLOAD Oct 2 19:14:59.544185 kernel: audit: type=1334 audit(1696274099.011:89): prog-id=15 op=LOAD Oct 2 19:14:59.544217 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:14:59.544253 kernel: audit: type=1334 audit(1696274099.011:90): prog-id=12 op=UNLOAD Oct 2 19:14:59.544289 systemd[1]: Stopped initrd-switch-root.service. 
Oct 2 19:14:59.544320 kernel: audit: type=1334 audit(1696274099.013:91): prog-id=16 op=LOAD Oct 2 19:14:59.544350 kernel: audit: type=1334 audit(1696274099.016:92): prog-id=17 op=LOAD Oct 2 19:14:59.544383 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:14:59.544416 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:14:59.544452 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:14:59.544485 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Oct 2 19:14:59.544605 systemd[1]: Created slice system-getty.slice. Oct 2 19:14:59.544966 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:14:59.550943 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:14:59.551141 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:14:59.551179 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:14:59.551210 systemd[1]: Created slice user.slice. Oct 2 19:14:59.551241 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:14:59.551273 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:14:59.551303 systemd[1]: Set up automount boot.automount. Oct 2 19:14:59.556042 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:14:59.556126 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:14:59.556162 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:14:59.556198 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:14:59.556232 systemd[1]: Reached target integritysetup.target. Oct 2 19:14:59.556263 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:14:59.556294 systemd[1]: Reached target remote-fs.target. Oct 2 19:14:59.556327 systemd[1]: Reached target slices.target. Oct 2 19:14:59.556357 systemd[1]: Reached target swap.target. Oct 2 19:14:59.556387 systemd[1]: Reached target torcx.target. Oct 2 19:14:59.556786 systemd[1]: Reached target veritysetup.target. Oct 2 19:14:59.557454 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:14:59.557489 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:14:59.557525 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:14:59.557557 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:14:59.557587 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:14:59.557617 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:14:59.557647 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:14:59.557677 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:14:59.557708 systemd[1]: Mounting media.mount... Oct 2 19:14:59.557739 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:14:59.557771 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:14:59.557804 systemd[1]: Mounting tmp.mount... Oct 2 19:14:59.557834 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:14:59.557867 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:14:59.557896 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:14:59.557927 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:14:59.557957 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:14:59.557988 systemd[1]: Starting modprobe@drm.service... Oct 2 19:14:59.558062 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:14:59.558096 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:14:59.558132 systemd[1]: Starting modprobe@loop.service... 
Oct 2 19:14:59.558169 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:14:59.558199 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:14:59.558230 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:14:59.558262 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:14:59.558291 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:14:59.558321 systemd[1]: Stopped systemd-journald.service. Oct 2 19:14:59.558352 systemd[1]: Starting systemd-journald.service... Oct 2 19:14:59.558383 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:14:59.558417 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:14:59.558447 kernel: fuse: init (API version 7.34) Oct 2 19:14:59.558481 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:14:59.558512 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:14:59.558543 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:14:59.558575 systemd[1]: Stopped verity-setup.service. Oct 2 19:14:59.558604 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:14:59.558636 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:14:59.558668 systemd[1]: Mounted media.mount. Oct 2 19:14:59.558701 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:14:59.558730 kernel: loop: module loaded Oct 2 19:14:59.558773 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:14:59.558807 systemd[1]: Mounted tmp.mount. Oct 2 19:14:59.558837 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:14:59.558866 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:14:59.558897 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:14:59.558928 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:14:59.558958 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:14:59.558993 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:14:59.559040 systemd[1]: Finished modprobe@drm.service. Oct 2 19:14:59.559072 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:14:59.559102 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:14:59.559131 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:14:59.559170 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:14:59.559200 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:14:59.559229 systemd[1]: Finished modprobe@loop.service. Oct 2 19:14:59.559259 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:14:59.559290 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:14:59.559387 systemd[1]: Reached target network-pre.target. Oct 2 19:14:59.559422 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:14:59.559456 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:14:59.559486 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:14:59.559516 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:14:59.559545 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:14:59.559575 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:14:59.559604 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:14:59.559634 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:14:59.559669 systemd[1]: Starting systemd-hwdb-update.service... 
Oct 2 19:14:59.559709 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:14:59.559746 systemd-journald[1494]: Journal started Oct 2 19:14:59.559844 systemd-journald[1494]: Runtime Journal (/run/log/journal/ec2c990c539a33b43371daee8287f59c) is 8.0M, max 75.4M, 67.4M free. Oct 2 19:14:54.533000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:14:54.691000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:14:54.691000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:14:54.692000 audit: BPF prog-id=10 op=LOAD Oct 2 19:14:54.692000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:14:54.692000 audit: BPF prog-id=11 op=LOAD Oct 2 19:14:54.692000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:14:59.001000 audit: BPF prog-id=12 op=LOAD Oct 2 19:14:59.001000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:14:59.003000 audit: BPF prog-id=13 op=LOAD Oct 2 19:14:59.006000 audit: BPF prog-id=14 op=LOAD Oct 2 19:14:59.006000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:14:59.006000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:14:59.011000 audit: BPF prog-id=15 op=LOAD Oct 2 19:14:59.011000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:14:59.013000 audit: BPF prog-id=16 op=LOAD Oct 2 19:14:59.016000 audit: BPF prog-id=17 op=LOAD Oct 2 19:14:59.016000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:14:59.016000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:14:59.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.033000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:14:59.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:14:59.315000 audit: BPF prog-id=18 op=LOAD Oct 2 19:14:59.316000 audit: BPF prog-id=19 op=LOAD Oct 2 19:14:59.316000 audit: BPF prog-id=20 op=LOAD Oct 2 19:14:59.316000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:14:59.316000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:14:59.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:14:59.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.512000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:14:59.512000 audit[1494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe50597d0 a2=4000 a3=1 items=0 ppid=1 pid=1494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:59.512000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:14:59.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:54.896651 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:14:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:14:58.999616 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:14:59.593228 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:14:59.593279 systemd[1]: Started systemd-journald.service. Oct 2 19:14:59.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:54.908222 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:14:54Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:14:59.018425 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 2 19:14:54.908279 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:14:54Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:14:59.586814 systemd[1]: Starting systemd-journal-flush.service... 
Oct 2 19:14:54.908357 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:14:54Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:14:54.908386 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:14:54Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:14:54.908471 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:14:54Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:14:54.908504 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:14:54Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:14:54.908950 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:14:54Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:14:54.909067 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:14:54Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:14:54.909105 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:14:54Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:14:54.910072 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:14:54Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:14:54.910155 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:14:54Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:14:54.910206 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:14:54Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:14:54.910246 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:14:54Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:14:54.910295 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:14:54Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:14:54.910336 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:14:54Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:14:58.109627 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:14:58Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:14:58.110203 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:14:58Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:14:58.110450 
/usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:14:58Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:14:58.110923 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:14:58Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:14:58.111058 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:14:58Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:14:58.111199 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:14:58Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:14:59.645994 systemd-journald[1494]: Time spent on flushing to /var/log/journal/ec2c990c539a33b43371daee8287f59c is 63.161ms for 1137 entries. Oct 2 19:14:59.645994 systemd-journald[1494]: System Journal (/var/log/journal/ec2c990c539a33b43371daee8287f59c) is 8.0M, max 195.6M, 187.6M free. Oct 2 19:14:59.766140 systemd-journald[1494]: Received client request to flush runtime journal. Oct 2 19:14:59.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.658967 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:14:59.661120 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:14:59.685196 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:14:59.733129 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:14:59.737514 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:14:59.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:59.768657 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:14:59.772668 udevadm[1533]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 2 19:14:59.781176 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:14:59.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:14:59.785972 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:14:59.911478 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:14:59.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:00.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:00.533000 audit: BPF prog-id=21 op=LOAD Oct 2 19:15:00.533000 audit: BPF prog-id=22 op=LOAD Oct 2 19:15:00.533000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:15:00.533000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:15:00.531677 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:15:00.535937 systemd[1]: Starting systemd-udevd.service... Oct 2 19:15:00.583675 systemd-udevd[1540]: Using default interface naming scheme 'v252'. Oct 2 19:15:00.624336 systemd[1]: Started systemd-udevd.service. Oct 2 19:15:00.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:00.626000 audit: BPF prog-id=23 op=LOAD Oct 2 19:15:00.629201 systemd[1]: Starting systemd-networkd.service... Oct 2 19:15:00.650000 audit: BPF prog-id=24 op=LOAD Oct 2 19:15:00.650000 audit: BPF prog-id=25 op=LOAD Oct 2 19:15:00.650000 audit: BPF prog-id=26 op=LOAD Oct 2 19:15:00.654028 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:15:00.730993 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 19:15:00.773400 (udev-worker)[1551]: Network interface NamePolicy= disabled on kernel command line. Oct 2 19:15:00.822102 systemd[1]: Started systemd-userdbd.service. Oct 2 19:15:00.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.050545 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1547) Oct 2 19:15:01.069918 systemd-networkd[1546]: lo: Link UP Oct 2 19:15:01.070444 systemd-networkd[1546]: lo: Gained carrier Oct 2 19:15:01.071534 systemd-networkd[1546]: Enumeration completed Oct 2 19:15:01.071859 systemd[1]: Started systemd-networkd.service. Oct 2 19:15:01.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.076043 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:15:01.081178 systemd-networkd[1546]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:15:01.088961 systemd-networkd[1546]: eth0: Link UP Oct 2 19:15:01.089184 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 2 19:15:01.089569 systemd-networkd[1546]: eth0: Gained carrier Oct 2 19:15:01.122293 systemd-networkd[1546]: eth0: DHCPv4 address 172.31.22.12/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 2 19:15:01.298194 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Oct 2 19:15:01.300924 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:15:01.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.305247 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:15:01.357519 lvm[1659]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:15:01.395947 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:15:01.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.398257 systemd[1]: Reached target cryptsetup.target. Oct 2 19:15:01.411063 systemd[1]: Starting lvm2-activation.service... Oct 2 19:15:01.425895 lvm[1660]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:15:01.464283 systemd[1]: Finished lvm2-activation.service. Oct 2 19:15:01.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.466317 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:15:01.468197 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:15:01.468390 systemd[1]: Reached target local-fs.target. Oct 2 19:15:01.470224 systemd[1]: Reached target machines.target. Oct 2 19:15:01.483094 systemd[1]: Starting ldconfig.service... Oct 2 19:15:01.485512 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:15:01.485794 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:15:01.488649 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:15:01.492999 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:15:01.499557 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:15:01.501581 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:15:01.501702 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:15:01.506572 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:15:01.573310 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:15:01.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.575992 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1662 (bootctl) Oct 2 19:15:01.578230 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:15:01.613967 systemd-tmpfiles[1665]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:15:01.620124 systemd-tmpfiles[1665]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Oct 2 19:15:01.638375 systemd-tmpfiles[1665]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:15:01.688051 systemd-fsck[1671]: fsck.fat 4.2 (2021-01-31) Oct 2 19:15:01.688051 systemd-fsck[1671]: /dev/nvme0n1p1: 236 files, 113463/258078 clusters Oct 2 19:15:01.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.703581 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:15:01.709833 systemd[1]: Mounting boot.mount... Oct 2 19:15:01.743383 systemd[1]: Mounted boot.mount. Oct 2 19:15:01.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.776189 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:15:01.968597 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:15:01.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.973940 systemd[1]: Starting audit-rules.service... Oct 2 19:15:01.986000 audit: BPF prog-id=27 op=LOAD Oct 2 19:15:01.978272 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:15:01.984201 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:15:01.994337 systemd[1]: Starting systemd-resolved.service... Oct 2 19:15:01.996000 audit: BPF prog-id=28 op=LOAD Oct 2 19:15:02.007504 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:15:02.016699 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:15:02.027902 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:15:02.030252 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:15:02.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:02.092000 audit[1693]: SYSTEM_BOOT pid=1693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:15:02.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:02.102466 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:15:02.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:02.225683 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:15:02.228127 systemd[1]: Reached target time-set.target. Oct 2 19:15:02.310172 systemd-networkd[1546]: eth0: Gained IPv6LL Oct 2 19:15:02.313914 systemd[1]: Finished systemd-networkd-wait-online.service. 
Oct 2 19:15:02.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:02.318240 systemd-resolved[1689]: Positive Trust Anchors: Oct 2 19:15:02.318270 systemd-resolved[1689]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:15:02.318322 systemd-resolved[1689]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:15:02.888683 systemd-timesyncd[1692]: Contacted time server 172.245.56.32:123 (0.flatcar.pool.ntp.org). Oct 2 19:15:02.889376 systemd-timesyncd[1692]: Initial clock synchronization to Mon 2023-10-02 19:15:02.888488 UTC. Oct 2 19:15:02.897985 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:15:02.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:03.123530 augenrules[1708]: No rules Oct 2 19:15:03.122000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:15:03.122000 audit[1708]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc4030cb0 a2=420 a3=0 items=0 ppid=1686 pid=1708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:03.122000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:15:03.125771 systemd-resolved[1689]: Defaulting to hostname 'linux'. Oct 2 19:15:03.126062 systemd[1]: Finished audit-rules.service. Oct 2 19:15:03.130781 systemd[1]: Started systemd-resolved.service. Oct 2 19:15:03.132662 systemd[1]: Reached target network.target. Oct 2 19:15:03.134612 systemd[1]: Reached target network-online.target. Oct 2 19:15:03.136405 systemd[1]: Reached target nss-lookup.target. Oct 2 19:15:03.563057 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:15:03.564121 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:15:03.637633 ldconfig[1661]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:15:03.643958 systemd[1]: Finished ldconfig.service. Oct 2 19:15:03.648015 systemd[1]: Starting systemd-update-done.service... Oct 2 19:15:03.670291 systemd[1]: Finished systemd-update-done.service. Oct 2 19:15:03.672423 systemd[1]: Reached target sysinit.target. Oct 2 19:15:03.674299 systemd[1]: Started motdgen.path. Oct 2 19:15:03.676038 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:15:03.678679 systemd[1]: Started logrotate.timer. Oct 2 19:15:03.680657 systemd[1]: Started mdadm.timer. Oct 2 19:15:03.682251 systemd[1]: Started systemd-tmpfiles-clean.timer. 
Oct 2 19:15:03.684114 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:15:03.684197 systemd[1]: Reached target paths.target. Oct 2 19:15:03.685789 systemd[1]: Reached target timers.target. Oct 2 19:15:03.688591 systemd[1]: Listening on dbus.socket. Oct 2 19:15:03.692335 systemd[1]: Starting docker.socket... Oct 2 19:15:03.701559 systemd[1]: Listening on sshd.socket. Oct 2 19:15:03.703569 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:15:03.704598 systemd[1]: Listening on docker.socket. Oct 2 19:15:03.706597 systemd[1]: Reached target sockets.target. Oct 2 19:15:03.708560 systemd[1]: Reached target basic.target. Oct 2 19:15:03.710582 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:15:03.710767 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:15:03.723993 systemd[1]: Started amazon-ssm-agent.service. Oct 2 19:15:03.728605 systemd[1]: Starting containerd.service... Oct 2 19:15:03.732332 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Oct 2 19:15:03.740581 systemd[1]: Starting dbus.service... Oct 2 19:15:03.755973 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:15:03.762955 systemd[1]: Starting extend-filesystems.service... Oct 2 19:15:03.764645 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:15:03.767039 systemd[1]: Starting motdgen.service... Oct 2 19:15:03.771142 systemd[1]: Started nvidia.service. Oct 2 19:15:03.776005 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:15:03.779939 systemd[1]: Starting prepare-critools.service... Oct 2 19:15:03.784228 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:15:03.790646 systemd[1]: Starting sshd-keygen.service... Oct 2 19:15:03.796602 systemd[1]: Starting systemd-logind.service... Oct 2 19:15:03.799090 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:15:03.799229 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:15:03.800127 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:15:03.801686 systemd[1]: Starting update-engine.service... Oct 2 19:15:03.807117 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:15:03.819958 jq[1720]: false Oct 2 19:15:03.861821 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:15:03.871283 jq[1730]: true Oct 2 19:15:03.862239 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:15:03.947175 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:15:03.947519 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Oct 2 19:15:03.966611 tar[1732]: ./ Oct 2 19:15:03.966611 tar[1732]: ./macvlan Oct 2 19:15:04.017794 tar[1733]: crictl Oct 2 19:15:04.026788 jq[1734]: true Oct 2 19:15:04.083394 extend-filesystems[1721]: Found nvme0n1 Oct 2 19:15:04.085606 extend-filesystems[1721]: Found nvme0n1p1 Oct 2 19:15:04.087293 extend-filesystems[1721]: Found nvme0n1p2 Oct 2 19:15:04.092078 extend-filesystems[1721]: Found nvme0n1p3 Oct 2 19:15:04.096067 extend-filesystems[1721]: Found usr Oct 2 19:15:04.101205 extend-filesystems[1721]: Found nvme0n1p4 Oct 2 19:15:04.103035 extend-filesystems[1721]: Found nvme0n1p6 Oct 2 19:15:04.107187 extend-filesystems[1721]: Found nvme0n1p7 Oct 2 19:15:04.114297 extend-filesystems[1721]: Found nvme0n1p9 Oct 2 19:15:04.116104 extend-filesystems[1721]: Checking size of /dev/nvme0n1p9 Oct 2 19:15:04.139408 dbus-daemon[1719]: [system] SELinux support is enabled Oct 2 19:15:04.140211 systemd[1]: Started dbus.service. Oct 2 19:15:04.145436 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:15:04.145496 systemd[1]: Reached target system-config.target. Oct 2 19:15:04.147415 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:15:04.147451 systemd[1]: Reached target user-config.target. Oct 2 19:15:04.166790 update_engine[1729]: I1002 19:15:04.166231 1729 main.cc:92] Flatcar Update Engine starting Oct 2 19:15:04.170767 dbus-daemon[1719]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1546 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 2 19:15:04.203273 systemd[1]: Starting systemd-hostnamed.service... Oct 2 19:15:04.219600 systemd[1]: Started update-engine.service. Oct 2 19:15:04.224397 systemd[1]: Started locksmithd.service. Oct 2 19:15:04.226722 update_engine[1729]: I1002 19:15:04.226663 1729 update_check_scheduler.cc:74] Next update check in 6m54s Oct 2 19:15:04.255908 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:15:04.256294 systemd[1]: Finished motdgen.service. Oct 2 19:15:04.294390 extend-filesystems[1721]: Resized partition /dev/nvme0n1p9 Oct 2 19:15:04.335128 bash[1782]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:15:04.337675 extend-filesystems[1783]: resize2fs 1.46.5 (30-Dec-2021) Oct 2 19:15:04.341787 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:15:04.356924 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Oct 2 19:15:04.409937 amazon-ssm-agent[1716]: 2023/10/02 19:15:04 Failed to load instance info from vault. RegistrationKey does not exist. Oct 2 19:15:04.423920 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Oct 2 19:15:04.452921 amazon-ssm-agent[1716]: Initializing new seelog logger Oct 2 19:15:04.452921 amazon-ssm-agent[1716]: New Seelog Logger Creation Complete Oct 2 19:15:04.452921 amazon-ssm-agent[1716]: 2023/10/02 19:15:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 2 19:15:04.452921 amazon-ssm-agent[1716]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Oct 2 19:15:04.452921 amazon-ssm-agent[1716]: 2023/10/02 19:15:04 processing appconfig overrides Oct 2 19:15:04.456056 extend-filesystems[1783]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Oct 2 19:15:04.456056 extend-filesystems[1783]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 2 19:15:04.456056 extend-filesystems[1783]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Oct 2 19:15:04.477086 extend-filesystems[1721]: Resized filesystem in /dev/nvme0n1p9 Oct 2 19:15:04.462726 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:15:04.463730 systemd[1]: Finished extend-filesystems.service. Oct 2 19:15:04.521154 tar[1732]: ./static Oct 2 19:15:04.563438 systemd-logind[1728]: Watching system buttons on /dev/input/event0 (Power Button) Oct 2 19:15:04.572250 systemd-logind[1728]: New seat seat0. Oct 2 19:15:04.575774 systemd[1]: Started systemd-logind.service. Oct 2 19:15:04.619769 env[1743]: time="2023-10-02T19:15:04.619582986Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:15:04.644610 systemd[1]: nvidia.service: Deactivated successfully. Oct 2 19:15:04.681367 tar[1732]: ./vlan Oct 2 19:15:04.774749 dbus-daemon[1719]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 2 19:15:04.775019 systemd[1]: Started systemd-hostnamed.service. Oct 2 19:15:04.778830 dbus-daemon[1719]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1770 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 2 19:15:04.783513 systemd[1]: Starting polkit.service... Oct 2 19:15:04.845985 polkitd[1815]: Started polkitd version 121 Oct 2 19:15:04.850380 env[1743]: time="2023-10-02T19:15:04.850319803Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:15:04.855487 env[1743]: time="2023-10-02T19:15:04.855428371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:15:04.874378 env[1743]: time="2023-10-02T19:15:04.874206331Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:15:04.874578 env[1743]: time="2023-10-02T19:15:04.874546831Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:15:04.875228 env[1743]: time="2023-10-02T19:15:04.875186275Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:15:04.875381 env[1743]: time="2023-10-02T19:15:04.875351275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 19:15:04.875522 env[1743]: time="2023-10-02T19:15:04.875491579Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:15:04.875700 env[1743]: time="2023-10-02T19:15:04.875659255Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Oct 2 19:15:04.876066 env[1743]: time="2023-10-02T19:15:04.876021859Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:15:04.877150 env[1743]: time="2023-10-02T19:15:04.877116763Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:15:04.878774 polkitd[1815]: Loading rules from directory /etc/polkit-1/rules.d Oct 2 19:15:04.884474 env[1743]: time="2023-10-02T19:15:04.884396203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:15:04.884897 polkitd[1815]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 2 19:15:04.885465 env[1743]: time="2023-10-02T19:15:04.885381091Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:15:04.885987 env[1743]: time="2023-10-02T19:15:04.885928831Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:15:04.891973 polkitd[1815]: Finished loading, compiling and executing 2 rules Oct 2 19:15:04.892977 dbus-daemon[1719]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 2 19:15:04.893248 systemd[1]: Started polkit.service. Oct 2 19:15:04.893495 env[1743]: time="2023-10-02T19:15:04.893435587Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:15:04.896489 polkitd[1815]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 2 19:15:04.904159 env[1743]: time="2023-10-02T19:15:04.904060891Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:15:04.904446 env[1743]: time="2023-10-02T19:15:04.904411819Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:15:04.904595 env[1743]: time="2023-10-02T19:15:04.904564399Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:15:04.904911 env[1743]: time="2023-10-02T19:15:04.904756963Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:15:04.905020 env[1743]: time="2023-10-02T19:15:04.904926331Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:15:04.905020 env[1743]: time="2023-10-02T19:15:04.904964395Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:15:04.905020 env[1743]: time="2023-10-02T19:15:04.904997119Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:15:04.905541 env[1743]: time="2023-10-02T19:15:04.905489935Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:15:04.905631 env[1743]: time="2023-10-02T19:15:04.905547235Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:15:04.905631 env[1743]: time="2023-10-02T19:15:04.905584231Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Oct 2 19:15:04.905631 env[1743]: time="2023-10-02T19:15:04.905616535Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:15:04.905777 env[1743]: time="2023-10-02T19:15:04.905649019Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:15:04.905942 env[1743]: time="2023-10-02T19:15:04.905898199Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:15:04.906144 env[1743]: time="2023-10-02T19:15:04.906102955Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:15:04.906749 env[1743]: time="2023-10-02T19:15:04.906706351Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:15:04.906823 env[1743]: time="2023-10-02T19:15:04.906766207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:15:04.906823 env[1743]: time="2023-10-02T19:15:04.906799927Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:15:04.907164 env[1743]: time="2023-10-02T19:15:04.907123771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:15:04.907235 env[1743]: time="2023-10-02T19:15:04.907171663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:15:04.907235 env[1743]: time="2023-10-02T19:15:04.907205659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:15:04.907351 env[1743]: time="2023-10-02T19:15:04.907236607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:15:04.907351 env[1743]: time="2023-10-02T19:15:04.907266787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:15:04.907351 env[1743]: time="2023-10-02T19:15:04.907296679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:15:04.907351 env[1743]: time="2023-10-02T19:15:04.907325407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:15:04.907552 env[1743]: time="2023-10-02T19:15:04.907354135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:15:04.907552 env[1743]: time="2023-10-02T19:15:04.907388707Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:15:04.907694 env[1743]: time="2023-10-02T19:15:04.907674019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:15:04.907752 env[1743]: time="2023-10-02T19:15:04.907709899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:15:04.907752 env[1743]: time="2023-10-02T19:15:04.907739899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:15:04.907860 env[1743]: time="2023-10-02T19:15:04.907769287Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Oct 2 19:15:04.907860 env[1743]: time="2023-10-02T19:15:04.907801051Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:15:04.907860 env[1743]: time="2023-10-02T19:15:04.907831291Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:15:04.908062 env[1743]: time="2023-10-02T19:15:04.907867627Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:15:04.908062 env[1743]: time="2023-10-02T19:15:04.907954171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 19:15:04.916900 env[1743]: time="2023-10-02T19:15:04.908314339Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:15:04.918174 env[1743]: time="2023-10-02T19:15:04.916901383Z" level=info msg="Connect containerd service" Oct 2 19:15:04.918174 env[1743]: time="2023-10-02T19:15:04.916980091Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:15:04.922300 tar[1732]: ./portmap Oct 2 19:15:04.931048 env[1743]: time="2023-10-02T19:15:04.930980444Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni 
config" Oct 2 19:15:04.938508 systemd-hostnamed[1770]: Hostname set to (transient) Oct 2 19:15:04.938685 systemd-resolved[1689]: System hostname changed to 'ip-172-31-22-12'. Oct 2 19:15:04.939317 env[1743]: time="2023-10-02T19:15:04.937504592Z" level=info msg="Start subscribing containerd event" Oct 2 19:15:04.939317 env[1743]: time="2023-10-02T19:15:04.938787752Z" level=info msg="Start recovering state" Oct 2 19:15:04.939317 env[1743]: time="2023-10-02T19:15:04.938933528Z" level=info msg="Start event monitor" Oct 2 19:15:04.939317 env[1743]: time="2023-10-02T19:15:04.939084584Z" level=info msg="Start snapshots syncer" Oct 2 19:15:04.939317 env[1743]: time="2023-10-02T19:15:04.939111032Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:15:04.939317 env[1743]: time="2023-10-02T19:15:04.939131972Z" level=info msg="Start streaming server" Oct 2 19:15:04.947975 env[1743]: time="2023-10-02T19:15:04.947907320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:15:04.953548 env[1743]: time="2023-10-02T19:15:04.953480552Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:15:05.006449 systemd[1]: Started containerd.service. Oct 2 19:15:05.008055 env[1743]: time="2023-10-02T19:15:05.007370392Z" level=info msg="containerd successfully booted in 0.401797s" Oct 2 19:15:05.065963 tar[1732]: ./host-local Oct 2 19:15:05.115976 coreos-metadata[1718]: Oct 02 19:15:05.115 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 2 19:15:05.121606 coreos-metadata[1718]: Oct 02 19:15:05.121 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Oct 2 19:15:05.123189 coreos-metadata[1718]: Oct 02 19:15:05.123 INFO Fetch successful Oct 2 19:15:05.123189 coreos-metadata[1718]: Oct 02 19:15:05.123 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 2 19:15:05.124298 coreos-metadata[1718]: Oct 02 19:15:05.124 INFO Fetch successful Oct 2 19:15:05.128142 unknown[1718]: wrote ssh authorized keys file for user: core Oct 2 19:15:05.163908 update-ssh-keys[1843]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:15:05.165209 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Oct 2 19:15:05.209786 tar[1732]: ./vrf Oct 2 19:15:05.308469 tar[1732]: ./bridge Oct 2 19:15:05.424018 amazon-ssm-agent[1716]: 2023-10-02 19:15:05 INFO Entering SSM Agent hibernate - AccessDeniedException: User: arn:aws:sts::075585003325:assumed-role/jenkins-test/i-0a883fec05341492a is not authorized to perform: ssm:UpdateInstanceInformation on resource: arn:aws:ec2:us-west-2:075585003325:instance/i-0a883fec05341492a because no identity-based policy allows the ssm:UpdateInstanceInformation action Oct 2 19:15:05.424018 amazon-ssm-agent[1716]: status code: 400, request id: 5841a7c7-edfd-4bc8-8cd5-cf3ac74bc59e Oct 2 19:15:05.424654 amazon-ssm-agent[1716]: 2023-10-02 19:15:05 INFO Agent is in hibernate mode. Reducing logging. Logging will be reduced to one log per backoff period Oct 2 19:15:05.442606 tar[1732]: ./tuning Oct 2 19:15:05.548369 tar[1732]: ./firewall Oct 2 19:15:05.695600 tar[1732]: ./host-device Oct 2 19:15:05.847694 tar[1732]: ./sbr Oct 2 19:15:05.966279 tar[1732]: ./loopback Oct 2 19:15:06.078175 tar[1732]: ./dhcp Oct 2 19:15:06.178615 systemd[1]: Finished prepare-critools.service. 
Oct 2 19:15:06.254705 tar[1732]: ./ptp Oct 2 19:15:06.316960 tar[1732]: ./ipvlan Oct 2 19:15:06.377920 tar[1732]: ./bandwidth Oct 2 19:15:06.462082 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:15:06.548772 locksmithd[1773]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:15:11.214985 sshd_keygen[1751]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:15:11.273445 systemd[1]: Finished sshd-keygen.service. Oct 2 19:15:11.278168 systemd[1]: Starting issuegen.service... Oct 2 19:15:11.297599 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:15:11.297973 systemd[1]: Finished issuegen.service. Oct 2 19:15:11.302758 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:15:11.326035 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:15:11.331962 systemd[1]: Started getty@tty1.service. Oct 2 19:15:11.337271 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 19:15:11.339515 systemd[1]: Reached target getty.target. Oct 2 19:15:11.341538 systemd[1]: Reached target multi-user.target. Oct 2 19:15:11.346262 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:15:11.370551 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:15:11.371108 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:15:11.373653 systemd[1]: Startup finished in 1.212s (kernel) + 12.804s (initrd) + 16.497s (userspace) = 30.513s. Oct 2 19:15:12.779942 systemd[1]: Created slice system-sshd.slice. Oct 2 19:15:12.782318 systemd[1]: Started sshd@0-172.31.22.12:22-139.178.89.65:37656.service. Oct 2 19:15:12.984697 sshd[1929]: Accepted publickey for core from 139.178.89.65 port 37656 ssh2: RSA SHA256:xq1jsPPMn3xJqYX9WbisZ9n0n6wOxmd44nRnO32wqqo Oct 2 19:15:12.990000 sshd[1929]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:15:13.008200 systemd[1]: Created slice user-500.slice. Oct 2 19:15:13.010755 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:15:13.018992 systemd-logind[1728]: New session 1 of user core. Oct 2 19:15:13.037693 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:15:13.041800 systemd[1]: Starting user@500.service... Oct 2 19:15:13.054048 (systemd)[1932]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:15:13.263062 systemd[1932]: Queued start job for default target default.target. Oct 2 19:15:13.265368 systemd[1932]: Reached target paths.target. Oct 2 19:15:13.265609 systemd[1932]: Reached target sockets.target. Oct 2 19:15:13.265758 systemd[1932]: Reached target timers.target. Oct 2 19:15:13.265941 systemd[1932]: Reached target basic.target. Oct 2 19:15:13.266161 systemd[1932]: Reached target default.target. Oct 2 19:15:13.266257 systemd[1]: Started user@500.service. Oct 2 19:15:13.267801 systemd[1932]: Startup finished in 195ms. Oct 2 19:15:13.268208 systemd[1]: Started session-1.scope. Oct 2 19:15:13.432264 systemd[1]: Started sshd@1-172.31.22.12:22-139.178.89.65:37662.service. Oct 2 19:15:13.611907 sshd[1941]: Accepted publickey for core from 139.178.89.65 port 37662 ssh2: RSA SHA256:xq1jsPPMn3xJqYX9WbisZ9n0n6wOxmd44nRnO32wqqo Oct 2 19:15:13.615856 sshd[1941]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:15:13.625085 systemd-logind[1728]: New session 2 of user core. Oct 2 19:15:13.625418 systemd[1]: Started session-2.scope. 
Oct 2 19:15:13.783234 sshd[1941]: pam_unix(sshd:session): session closed for user core Oct 2 19:15:13.789456 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:15:13.790591 systemd[1]: sshd@1-172.31.22.12:22-139.178.89.65:37662.service: Deactivated successfully. Oct 2 19:15:13.792248 systemd-logind[1728]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:15:13.794587 systemd-logind[1728]: Removed session 2. Oct 2 19:15:13.816300 systemd[1]: Started sshd@2-172.31.22.12:22-139.178.89.65:37676.service. Oct 2 19:15:13.996330 sshd[1947]: Accepted publickey for core from 139.178.89.65 port 37676 ssh2: RSA SHA256:xq1jsPPMn3xJqYX9WbisZ9n0n6wOxmd44nRnO32wqqo Oct 2 19:15:13.999833 sshd[1947]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:15:14.009364 systemd-logind[1728]: New session 3 of user core. Oct 2 19:15:14.010395 systemd[1]: Started session-3.scope. Oct 2 19:15:14.146231 sshd[1947]: pam_unix(sshd:session): session closed for user core Oct 2 19:15:14.152540 systemd-logind[1728]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:15:14.154061 systemd[1]: sshd@2-172.31.22.12:22-139.178.89.65:37676.service: Deactivated successfully. Oct 2 19:15:14.155596 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 19:15:14.157330 systemd-logind[1728]: Removed session 3. Oct 2 19:15:14.179324 systemd[1]: Started sshd@3-172.31.22.12:22-139.178.89.65:37682.service. Oct 2 19:15:14.366291 sshd[1953]: Accepted publickey for core from 139.178.89.65 port 37682 ssh2: RSA SHA256:xq1jsPPMn3xJqYX9WbisZ9n0n6wOxmd44nRnO32wqqo Oct 2 19:15:14.370004 sshd[1953]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:15:14.380033 systemd-logind[1728]: New session 4 of user core. Oct 2 19:15:14.380662 systemd[1]: Started session-4.scope. Oct 2 19:15:14.535963 sshd[1953]: pam_unix(sshd:session): session closed for user core Oct 2 19:15:14.544096 systemd-logind[1728]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:15:14.544542 systemd[1]: sshd@3-172.31.22.12:22-139.178.89.65:37682.service: Deactivated successfully. Oct 2 19:15:14.545950 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:15:14.547313 systemd-logind[1728]: Removed session 4. Oct 2 19:15:14.567671 systemd[1]: Started sshd@4-172.31.22.12:22-139.178.89.65:37694.service. Oct 2 19:15:14.750430 sshd[1960]: Accepted publickey for core from 139.178.89.65 port 37694 ssh2: RSA SHA256:xq1jsPPMn3xJqYX9WbisZ9n0n6wOxmd44nRnO32wqqo Oct 2 19:15:14.754066 sshd[1960]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:15:14.763935 systemd-logind[1728]: New session 5 of user core. Oct 2 19:15:14.764960 systemd[1]: Started session-5.scope. Oct 2 19:15:14.896736 sudo[1963]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:15:14.897281 sudo[1963]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:15:14.914184 dbus-daemon[1719]: avc: received setenforce notice (enforcing=1) Oct 2 19:15:14.917628 sudo[1963]: pam_unix(sudo:session): session closed for user root Oct 2 19:15:14.945429 sshd[1960]: pam_unix(sshd:session): session closed for user core Oct 2 19:15:14.951247 systemd-logind[1728]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:15:14.951854 systemd[1]: sshd@4-172.31.22.12:22-139.178.89.65:37694.service: Deactivated successfully. Oct 2 19:15:14.953166 systemd[1]: session-5.scope: Deactivated successfully. 
Oct 2 19:15:14.954665 systemd-logind[1728]: Removed session 5. Oct 2 19:15:14.975199 systemd[1]: Started sshd@5-172.31.22.12:22-139.178.89.65:37698.service. Oct 2 19:15:15.157961 sshd[1967]: Accepted publickey for core from 139.178.89.65 port 37698 ssh2: RSA SHA256:xq1jsPPMn3xJqYX9WbisZ9n0n6wOxmd44nRnO32wqqo Oct 2 19:15:15.161435 sshd[1967]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:15:15.171141 systemd[1]: Started session-6.scope. Oct 2 19:15:15.172956 systemd-logind[1728]: New session 6 of user core. Oct 2 19:15:15.294042 sudo[1971]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:15:15.294557 sudo[1971]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:15:15.302182 sudo[1971]: pam_unix(sudo:session): session closed for user root Oct 2 19:15:15.315779 sudo[1970]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:15:15.316757 sudo[1970]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:15:15.340577 systemd[1]: Stopping audit-rules.service... Oct 2 19:15:15.344000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:15:15.347643 kernel: kauditd_printk_skb: 72 callbacks suppressed Oct 2 19:15:15.347704 kernel: audit: type=1305 audit(1696274115.344:161): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:15:15.351220 auditctl[1974]: No rules Oct 2 19:15:15.353389 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:15:15.344000 audit[1974]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc593a200 a2=420 a3=0 items=0 ppid=1 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:15.364653 kernel: audit: type=1300 audit(1696274115.344:161): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc593a200 a2=420 a3=0 items=0 ppid=1 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:15.353753 systemd[1]: Stopped audit-rules.service. Oct 2 19:15:15.361054 systemd[1]: Starting audit-rules.service... Oct 2 19:15:15.344000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:15:15.368731 kernel: audit: type=1327 audit(1696274115.344:161): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:15:15.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:15.378930 kernel: audit: type=1131 audit(1696274115.352:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:15.425690 augenrules[1991]: No rules Oct 2 19:15:15.426818 systemd[1]: Finished audit-rules.service. 
Oct 2 19:15:15.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:15.435265 sudo[1970]: pam_unix(sudo:session): session closed for user root Oct 2 19:15:15.434000 audit[1970]: USER_END pid=1970 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:15:15.445271 kernel: audit: type=1130 audit(1696274115.426:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:15.445357 kernel: audit: type=1106 audit(1696274115.434:164): pid=1970 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:15:15.434000 audit[1970]: CRED_DISP pid=1970 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:15:15.454089 kernel: audit: type=1104 audit(1696274115.434:165): pid=1970 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:15:15.467484 sshd[1967]: pam_unix(sshd:session): session closed for user core Oct 2 19:15:15.468000 audit[1967]: USER_END pid=1967 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:15:15.482680 systemd[1]: sshd@5-172.31.22.12:22-139.178.89.65:37698.service: Deactivated successfully. Oct 2 19:15:15.469000 audit[1967]: CRED_DISP pid=1967 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:15:15.484121 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:15:15.492042 kernel: audit: type=1106 audit(1696274115.468:166): pid=1967 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:15:15.492209 kernel: audit: type=1104 audit(1696274115.469:167): pid=1967 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:15:15.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.22.12:22-139.178.89.65:37698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:15:15.503012 kernel: audit: type=1131 audit(1696274115.482:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.22.12:22-139.178.89.65:37698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:15.501932 systemd-logind[1728]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:15:15.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.22.12:22-139.178.89.65:37710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:15.506525 systemd[1]: Started sshd@6-172.31.22.12:22-139.178.89.65:37710.service. Oct 2 19:15:15.509083 systemd-logind[1728]: Removed session 6. Oct 2 19:15:15.692000 audit[1997]: USER_ACCT pid=1997 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:15:15.693998 sshd[1997]: Accepted publickey for core from 139.178.89.65 port 37710 ssh2: RSA SHA256:xq1jsPPMn3xJqYX9WbisZ9n0n6wOxmd44nRnO32wqqo Oct 2 19:15:15.696000 audit[1997]: CRED_ACQ pid=1997 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:15:15.696000 audit[1997]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcc278f00 a2=3 a3=1 items=0 ppid=1 pid=1997 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:15.696000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:15:15.698223 sshd[1997]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:15:15.706919 systemd[1]: Started session-7.scope. Oct 2 19:15:15.707680 systemd-logind[1728]: New session 7 of user core. Oct 2 19:15:15.715000 audit[1997]: USER_START pid=1997 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:15:15.722000 audit[1999]: CRED_ACQ pid=1999 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:15:15.828000 audit[2000]: USER_ACCT pid=2000 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:15:15.829319 sudo[2000]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:15:15.828000 audit[2000]: CRED_REFR pid=2000 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:15:15.829819 sudo[2000]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:15:15.832000 audit[2000]: USER_START pid=2000 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:15:16.507843 systemd[1]: Reloading. Oct 2 19:15:16.703579 /usr/lib/systemd/system-generators/torcx-generator[2029]: time="2023-10-02T19:15:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:15:16.703646 /usr/lib/systemd/system-generators/torcx-generator[2029]: time="2023-10-02T19:15:16Z" level=info msg="torcx already run" Oct 2 19:15:16.946654 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:15:16.946910 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:15:16.984678 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:15:17.147000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.147000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.147000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.147000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.147000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.147000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.147000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.147000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.148000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.148000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.148000 audit: BPF prog-id=37 op=LOAD Oct 2 19:15:17.148000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:15:17.149000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.149000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.149000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.149000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.149000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.150000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.150000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.150000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.150000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.150000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.150000 audit: BPF prog-id=38 op=LOAD Oct 2 19:15:17.151000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:15:17.152000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.153000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.153000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.153000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.153000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.153000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.153000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.153000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.153000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.154000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.154000 audit: BPF prog-id=39 op=LOAD Oct 2 19:15:17.154000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:15:17.156000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.156000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.156000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.156000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.156000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.156000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.156000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.156000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.156000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.157000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.157000 audit: BPF prog-id=40 op=LOAD Oct 2 19:15:17.157000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:15:17.157000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.157000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.157000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.158000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.158000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.158000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.158000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.158000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.158000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.158000 audit: BPF prog-id=41 op=LOAD Oct 2 19:15:17.158000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.158000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.158000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.159000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.159000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.159000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.159000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.159000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.159000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.159000 audit: BPF prog-id=42 op=LOAD Oct 2 19:15:17.160000 audit: BPF prog-id=33 op=UNLOAD Oct 2 
19:15:17.160000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:15:17.162000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.162000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.162000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.162000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.162000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.162000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.163000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.163000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.163000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.163000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.163000 audit: BPF prog-id=43 op=LOAD Oct 2 19:15:17.163000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:15:17.169000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.169000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.169000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.170000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.170000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.170000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.170000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.170000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.170000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.170000 audit: BPF prog-id=44 op=LOAD Oct 2 19:15:17.171000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.171000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.171000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.171000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.171000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.171000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.171000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.171000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.171000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.171000 audit: BPF prog-id=45 op=LOAD Oct 2 19:15:17.172000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:15:17.172000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:15:17.173000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.173000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.173000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.173000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.173000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.173000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.173000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.173000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.173000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.173000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.173000 audit: BPF prog-id=46 op=LOAD Oct 2 19:15:17.173000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:15:17.173000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.173000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.173000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.173000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.173000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.173000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.174000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.174000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.174000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.174000 audit: BPF prog-id=47 op=LOAD Oct 2 19:15:17.174000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.174000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:15:17.174000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.174000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.174000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.174000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.174000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.174000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.174000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.174000 audit: BPF prog-id=48 op=LOAD Oct 2 19:15:17.174000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:15:17.174000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:15:17.175000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.175000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.175000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.175000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.175000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.175000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.175000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.175000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.175000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.176000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.176000 audit: BPF prog-id=49 op=LOAD Oct 2 19:15:17.176000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:15:17.176000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.176000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.176000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.176000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.176000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.176000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.176000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.176000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.176000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.176000 audit: BPF prog-id=50 op=LOAD Oct 2 19:15:17.176000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.176000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.176000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.176000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.176000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.176000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.176000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.176000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.176000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.176000 audit: BPF prog-id=51 op=LOAD Oct 2 19:15:17.176000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:15:17.176000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:15:17.179000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.179000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.179000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.179000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.179000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.179000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.179000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.179000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.179000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.180000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.180000 audit: BPF prog-id=52 op=LOAD Oct 2 19:15:17.180000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:15:17.180000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.180000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.180000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.180000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.180000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.180000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.180000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.180000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.180000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.180000 audit: BPF prog-id=53 op=LOAD Oct 2 19:15:17.180000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.180000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.180000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.180000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.180000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.180000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.180000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.180000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.180000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:17.180000 audit: BPF prog-id=54 op=LOAD Oct 2 19:15:17.180000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:15:17.180000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:15:17.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:17.220805 systemd[1]: Started kubelet.service. 
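Most of the reload output above is SELinux AVC noise: repeated denials of the bpf (capability 39) and perfmon (capability 38) capabilities for pid 1, interleaved with BPF prog-id LOAD/UNLOAD pairs as systemd swaps out its BPF programs during the daemon reload. A small, purely illustrative Python helper for condensing such a stream when reading a log like this one; the field layout in the regex is copied from the records above, and the helper is not part of the image itself:

```python
import re
from collections import Counter

# Matches records like:
#   audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 ...
AVC_RE = re.compile(
    r'AVC avc:\s+denied\s+\{ (?P<perm>\w+) \}\s+for\s+pid=(?P<pid>\d+)\s+'
    r'comm="(?P<comm>[^"]+)"(?:\s+capability=(?P<cap>\d+))?'
)

def summarize(lines):
    """Count AVC denials per (comm, permission) pair in an iterable of log lines."""
    counts = Counter()
    for line in lines:
        match = AVC_RE.search(line)
        if match:
            counts[(match.group("comm"), match.group("perm"))] += 1
    return counts

if __name__ == "__main__":
    sample = [
        'audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39',
        'audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38',
    ]
    for (comm, perm), count in summarize(sample).items():
        print(f'{comm}: "{perm}" denied {count}x')
```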
Oct 2 19:15:17.260352 systemd[1]: Starting coreos-metadata.service... Oct 2 19:15:17.432837 kubelet[2084]: E1002 19:15:17.432722 2084 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Oct 2 19:15:17.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 19:15:17.437704 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 19:15:17.438067 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 19:15:17.458343 coreos-metadata[2092]: Oct 02 19:15:17.458 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 2 19:15:17.459425 coreos-metadata[2092]: Oct 02 19:15:17.459 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1 Oct 2 19:15:17.460147 coreos-metadata[2092]: Oct 02 19:15:17.460 INFO Fetch successful Oct 2 19:15:17.460234 coreos-metadata[2092]: Oct 02 19:15:17.460 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1 Oct 2 19:15:17.460807 coreos-metadata[2092]: Oct 02 19:15:17.460 INFO Fetch successful Oct 2 19:15:17.460943 coreos-metadata[2092]: Oct 02 19:15:17.460 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1 Oct 2 19:15:17.461526 coreos-metadata[2092]: Oct 02 19:15:17.461 INFO Fetch successful Oct 2 19:15:17.461606 coreos-metadata[2092]: Oct 02 19:15:17.461 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1 Oct 2 19:15:17.462190 coreos-metadata[2092]: Oct 02 19:15:17.462 INFO Fetch successful Oct 2 19:15:17.462272 coreos-metadata[2092]: Oct 02 19:15:17.462 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1 Oct 2 19:15:17.462829 coreos-metadata[2092]: Oct 02 19:15:17.462 INFO Fetch successful Oct 2 19:15:17.462947 coreos-metadata[2092]: Oct 02 19:15:17.462 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1 Oct 2 19:15:17.463576 coreos-metadata[2092]: Oct 02 19:15:17.463 INFO Fetch successful Oct 2 19:15:17.463656 coreos-metadata[2092]: Oct 02 19:15:17.463 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1 Oct 2 19:15:17.464226 coreos-metadata[2092]: Oct 02 19:15:17.464 INFO Fetch successful Oct 2 19:15:17.464331 coreos-metadata[2092]: Oct 02 19:15:17.464 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1 Oct 2 19:15:17.465065 coreos-metadata[2092]: Oct 02 19:15:17.465 INFO Fetch successful Oct 2 19:15:17.488050 systemd[1]: Finished coreos-metadata.service. Oct 2 19:15:17.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:18.140228 systemd[1]: Stopped kubelet.service. Oct 2 19:15:18.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:18.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:15:18.185693 systemd[1]: Reloading. Oct 2 19:15:18.382583 /usr/lib/systemd/system-generators/torcx-generator[2148]: time="2023-10-02T19:15:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:15:18.382643 /usr/lib/systemd/system-generators/torcx-generator[2148]: time="2023-10-02T19:15:18Z" level=info msg="torcx already run" Oct 2 19:15:18.614268 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:15:18.614472 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:15:18.653386 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:15:18.815000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.815000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.815000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.816000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.816000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.816000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.816000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.816000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.816000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.816000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.816000 audit: BPF prog-id=55 op=LOAD Oct 2 19:15:18.817000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:15:18.817000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
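The coreos-metadata fetch sequence above is the standard IMDSv2 exchange: PUT a session token at /latest/api/token, then GET the individual meta-data paths with the token attached, here against the 2019-10-01 API version. A rough Python equivalent, runnable only from inside an EC2 instance; the token TTL and the selection of paths are illustrative choices, not values taken from this host:

```python
import json
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl: int = 21600) -> str:
    # PUT /latest/api/token to obtain an IMDSv2 session token.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    # GET a metadata path with the session token attached.
    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    for path in ("meta-data/instance-id", "meta-data/local-ipv4",
                 "meta-data/placement/availability-zone"):
        print(path, "=", imds_get(path, token))
    doc = json.loads(imds_get("dynamic/instance-identity/document", token))
    print(json.dumps(doc, indent=2))
```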
Oct 2 19:15:18.818000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.818000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.818000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.818000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.818000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.818000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.818000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.818000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.819000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.819000 audit: BPF prog-id=56 op=LOAD Oct 2 19:15:18.819000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:15:18.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.822000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.822000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.822000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.822000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.822000 audit: BPF prog-id=57 op=LOAD Oct 2 19:15:18.822000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:15:18.824000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.824000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.824000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.824000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.824000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.825000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.825000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.825000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.825000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.825000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.825000 audit: BPF prog-id=58 op=LOAD Oct 2 19:15:18.825000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:15:18.826000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.826000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.826000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.826000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.826000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.826000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.826000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.826000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.827000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.827000 audit: BPF prog-id=59 op=LOAD Oct 2 19:15:18.827000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.827000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.827000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.827000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.827000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.827000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.827000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.827000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.828000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.828000 audit: BPF prog-id=60 op=LOAD Oct 2 19:15:18.828000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:15:18.828000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:15:18.830000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.830000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.831000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.831000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.831000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.831000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.831000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.831000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.831000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.832000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.832000 audit: BPF prog-id=61 op=LOAD Oct 2 19:15:18.832000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:15:18.838000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.838000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.838000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.838000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.838000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.838000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.838000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.838000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:15:18.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.839000 audit: BPF prog-id=62 op=LOAD Oct 2 19:15:18.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.840000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.840000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.840000 audit: BPF prog-id=63 op=LOAD Oct 2 19:15:18.840000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:15:18.840000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:15:18.842000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.842000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.842000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.842000 audit[1]: AVC 
avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.842000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.843000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.843000 audit: BPF prog-id=64 op=LOAD Oct 2 19:15:18.843000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:15:18.843000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.843000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.843000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.844000 audit: BPF prog-id=65 op=LOAD Oct 2 19:15:18.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.845000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.845000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.845000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.845000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.845000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.845000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.845000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.845000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.845000 audit: BPF prog-id=66 op=LOAD Oct 2 19:15:18.845000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:15:18.846000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:15:18.847000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.847000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.847000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.847000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.847000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.849000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.849000 audit: BPF prog-id=67 op=LOAD Oct 2 19:15:18.849000 audit: BPF prog-id=49 op=UNLOAD Oct 2 19:15:18.849000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.849000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.849000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.849000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.849000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.850000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.850000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.850000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.850000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.850000 audit: BPF prog-id=68 op=LOAD Oct 2 19:15:18.850000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.850000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.850000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.850000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.851000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.851000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.851000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.851000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.851000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.851000 audit: BPF prog-id=69 op=LOAD Oct 2 19:15:18.851000 audit: BPF prog-id=50 op=UNLOAD Oct 2 19:15:18.851000 audit: BPF prog-id=51 op=UNLOAD Oct 2 19:15:18.855000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.855000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.855000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.856000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.856000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.856000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.856000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.856000 audit: BPF prog-id=70 op=LOAD Oct 2 19:15:18.857000 audit: BPF prog-id=52 op=UNLOAD Oct 2 19:15:18.857000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.857000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.857000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.857000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.857000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:15:18.857000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.857000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.857000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.858000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.858000 audit: BPF prog-id=71 op=LOAD Oct 2 19:15:18.858000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.858000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.858000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.858000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.858000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.858000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.858000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.859000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.859000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:18.859000 audit: BPF prog-id=72 op=LOAD Oct 2 19:15:18.859000 audit: BPF prog-id=53 op=UNLOAD Oct 2 19:15:18.859000 audit: BPF prog-id=54 op=UNLOAD Oct 2 19:15:18.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:18.908208 systemd[1]: Started kubelet.service. Oct 2 19:15:19.053222 kubelet[2203]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:15:19.053222 kubelet[2203]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:15:19.053783 kubelet[2203]: I1002 19:15:19.053549 2203 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:15:19.056290 kubelet[2203]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:15:19.056290 kubelet[2203]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:15:20.057704 kubelet[2203]: I1002 19:15:20.057665 2203 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Oct 2 19:15:20.058418 kubelet[2203]: I1002 19:15:20.058393 2203 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:15:20.059073 kubelet[2203]: I1002 19:15:20.059047 2203 server.go:836] "Client rotation is on, will bootstrap in background" Oct 2 19:15:20.067906 kubelet[2203]: W1002 19:15:20.067840 2203 machine.go:65] Cannot read vendor id correctly, set empty. Oct 2 19:15:20.069631 kubelet[2203]: I1002 19:15:20.069577 2203 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:15:20.070207 kubelet[2203]: I1002 19:15:20.070182 2203 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 2 19:15:20.071129 kubelet[2203]: I1002 19:15:20.071104 2203 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:15:20.071379 kubelet[2203]: I1002 19:15:20.071357 2203 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Oct 2 19:15:20.071696 kubelet[2203]: I1002 19:15:20.071672 2203 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:15:20.071820 kubelet[2203]: I1002 19:15:20.071800 2203 container_manager_linux.go:308] "Creating device plugin manager" Oct 2 19:15:20.072187 kubelet[2203]: I1002 19:15:20.072164 
2203 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:15:20.080698 kubelet[2203]: I1002 19:15:20.080663 2203 kubelet.go:398] "Attempting to sync node with API server" Oct 2 19:15:20.080952 kubelet[2203]: I1002 19:15:20.080931 2203 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:15:20.081183 kubelet[2203]: I1002 19:15:20.081161 2203 kubelet.go:297] "Adding apiserver pod source" Oct 2 19:15:20.081308 kubelet[2203]: I1002 19:15:20.081286 2203 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:15:20.081995 kubelet[2203]: E1002 19:15:20.081956 2203 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:20.082260 kubelet[2203]: E1002 19:15:20.082223 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:20.083496 kubelet[2203]: I1002 19:15:20.083433 2203 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:15:20.085802 kubelet[2203]: W1002 19:15:20.085768 2203 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 2 19:15:20.086946 kubelet[2203]: I1002 19:15:20.086915 2203 server.go:1186] "Started kubelet" Oct 2 19:15:20.090378 kubelet[2203]: I1002 19:15:20.090344 2203 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:15:20.089000 audit[2203]: AVC avc: denied { mac_admin } for pid=2203 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:20.089000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:15:20.089000 audit[2203]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000c8f9e0 a1=4000cb09a8 a2=4000c8f9b0 a3=25 items=0 ppid=1 pid=2203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.089000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:15:20.094238 kubelet[2203]: E1002 19:15:20.091108 2203 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:15:20.094412 kubelet[2203]: E1002 19:15:20.094288 2203 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:15:20.094412 kubelet[2203]: I1002 19:15:20.091593 2203 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:15:20.093000 audit[2203]: AVC avc: denied { mac_admin } for pid=2203 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:20.093000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:15:20.093000 audit[2203]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000733c20 a1=400073f908 a2=4000e56ba0 a3=25 items=0 ppid=1 pid=2203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.093000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:15:20.097078 kubelet[2203]: I1002 19:15:20.097046 2203 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:15:20.097587 kubelet[2203]: I1002 19:15:20.097538 2203 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:15:20.099685 kubelet[2203]: I1002 19:15:20.099648 2203 server.go:451] "Adding debug handlers to kubelet server" Oct 2 19:15:20.112084 kubelet[2203]: E1002 19:15:20.111866 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a605331773137", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 86860087, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 86860087, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
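The mac_admin denials and SELINUX_ERR records above correspond to the kubelet's own warning in the same lines: /opt/bin/kubelet tries to relabel /var/lib/kubelet/plugins_registry (and later the plugins and device-plugins directories) to system_u:object_r:container_file_t:s0, the kernel rejects that context as invalid under the loaded policy (SELINUX_ERR invalid_context), and the setxattr call returns EINVAL (exit=-22); CAP_MAC_ADMIN (capability 33) is denied for the same attempt. A minimal sketch, not part of the captured log, of the equivalent call using only the Python standard library; the path and context string are taken from the records above:

    # Illustrative only: reproduce the security.selinux setxattr that the log shows
    # failing with "invalid argument" (EINVAL). Linux-only; needs privileges to succeed.
    import os

    path = "/var/lib/kubelet/plugins_registry"          # path from the kubelet warning above
    context = b"system_u:object_r:container_file_t:s0"  # invalid_context from the SELINUX_ERR record

    try:
        os.setxattr(path, "security.selinux", context)
    except OSError as err:
        # On a host whose policy does not accept this context, the call fails
        # the same way the audit record does: errno 22, invalid argument.
        print("setxattr failed:", err)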
Oct 2 19:15:20.112508 kubelet[2203]: W1002 19:15:20.112466 2203 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.22.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:15:20.112634 kubelet[2203]: E1002 19:15:20.112525 2203 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.22.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:15:20.112704 kubelet[2203]: W1002 19:15:20.112659 2203 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:15:20.112704 kubelet[2203]: E1002 19:15:20.112695 2203 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:15:20.115449 kubelet[2203]: I1002 19:15:20.115410 2203 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 19:15:20.119058 kubelet[2203]: I1002 19:15:20.119021 2203 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 2 19:15:20.123960 kubelet[2203]: E1002 19:15:20.123758 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a605331e81857", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 94259287, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 94259287, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:20.124241 kubelet[2203]: E1002 19:15:20.124015 2203 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "172.31.22.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:15:20.124241 kubelet[2203]: W1002 19:15:20.124104 2203 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:15:20.124241 kubelet[2203]: E1002 19:15:20.124148 2203 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:15:20.158000 audit[2216]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=2216 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:20.158000 audit[2216]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff16b0760 a2=0 a3=1 items=0 ppid=2203 pid=2216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.158000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:15:20.163000 audit[2222]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2222 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:20.163000 audit[2222]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=fffff7fb14b0 a2=0 a3=1 items=0 ppid=2203 pid=2222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.163000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:15:20.180278 kubelet[2203]: I1002 19:15:20.180243 2203 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 2 19:15:20.180514 kubelet[2203]: E1002 19:15:20.180375 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a605336f111b3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.22.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178733491, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178733491, time.Local), Count:1, Type:"Normal", 
EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:20.180822 kubelet[2203]: I1002 19:15:20.180480 2203 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 2 19:15:20.180822 kubelet[2203]: I1002 19:15:20.180786 2203 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:15:20.182093 kubelet[2203]: E1002 19:15:20.181754 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a605336f176c3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.22.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178759363, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178759363, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:20.186011 kubelet[2203]: E1002 19:15:20.183442 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a605336f18a5b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.22.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178764379, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178764379, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:20.186672 kubelet[2203]: I1002 19:15:20.186607 2203 policy_none.go:49] "None policy: Start" Oct 2 19:15:20.188100 kubelet[2203]: I1002 19:15:20.188051 2203 memory_manager.go:169] "Starting memorymanager" policy="None" Oct 2 19:15:20.188280 kubelet[2203]: I1002 19:15:20.188133 2203 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:15:20.201060 systemd[1]: Created slice kubepods.slice. Oct 2 19:15:20.210971 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:15:20.171000 audit[2224]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=2224 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:20.171000 audit[2224]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe7563270 a2=0 a3=1 items=0 ppid=2203 pid=2224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.171000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:15:20.217934 systemd[1]: Created slice kubepods-besteffort.slice. Oct 2 19:15:20.223000 audit[2229]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=2229 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:20.223000 audit[2229]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffe3f2630 a2=0 a3=1 items=0 ppid=2203 pid=2229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.223000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:15:20.238505 kubelet[2203]: I1002 19:15:20.238431 2203 kubelet_node_status.go:70] "Attempting to register node" node="172.31.22.12" Oct 2 19:15:20.240552 kubelet[2203]: E1002 19:15:20.240504 2203 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.22.12" Oct 2 19:15:20.241515 kubelet[2203]: E1002 19:15:20.241365 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a605336f111b3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.22.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178733491, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 238375760, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), 
Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.12.178a605336f111b3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:20.243283 kubelet[2203]: E1002 19:15:20.243168 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a605336f176c3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.22.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178759363, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 238383812, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.12.178a605336f176c3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:20.244963 kubelet[2203]: E1002 19:15:20.244774 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a605336f18a5b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.22.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178764379, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 238388684, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.12.178a605336f18a5b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:20.246215 kubelet[2203]: I1002 19:15:20.246171 2203 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:15:20.245000 audit[2203]: AVC avc: denied { mac_admin } for pid=2203 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:20.245000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:15:20.245000 audit[2203]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000ce2570 a1=4000b66fd8 a2=4000ce2540 a3=25 items=0 ppid=1 pid=2203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.245000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:15:20.246661 kubelet[2203]: I1002 19:15:20.246337 2203 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:15:20.246661 kubelet[2203]: I1002 19:15:20.246632 2203 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:15:20.248251 kubelet[2203]: E1002 19:15:20.248218 2203 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.22.12\" not found" Oct 2 19:15:20.253191 kubelet[2203]: E1002 19:15:20.253037 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a60533b2a0424", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 249574436, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 249574436, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
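The PROCTITLE fields in these audit records are hex-encoded command lines whose arguments are separated by NUL bytes, so the iptables and kubelet invocations can be recovered directly from the log. A minimal decoding sketch, not part of the captured log, using only the Python standard library; the sample value is copied from the first iptables record above:

    # Illustrative only: decode an audit PROCTITLE hex string back into its argv.
    proctitle = ("69707461626C6573002D770035002D5700313030303030"
                 "002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65")

    argv = bytes.fromhex(proctitle).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # prints: iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle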
Oct 2 19:15:20.296000 audit[2234]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2234 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:20.296000 audit[2234]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffddf34250 a2=0 a3=1 items=0 ppid=2203 pid=2234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.296000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:15:20.300000 audit[2235]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=2235 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:20.300000 audit[2235]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc6c5ae70 a2=0 a3=1 items=0 ppid=2203 pid=2235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.300000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:15:20.314000 audit[2238]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=2238 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:20.314000 audit[2238]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffff34957f0 a2=0 a3=1 items=0 ppid=2203 pid=2238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.314000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:15:20.326059 kubelet[2203]: E1002 19:15:20.326022 2203 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "172.31.22.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:15:20.330000 audit[2241]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2241 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:20.330000 audit[2241]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=fffffb8313f0 a2=0 a3=1 items=0 ppid=2203 pid=2241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.330000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:15:20.334000 audit[2242]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=2242 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:20.334000 audit[2242]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 
a1=ffffef0c3200 a2=0 a3=1 items=0 ppid=2203 pid=2242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.334000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:15:20.339000 audit[2243]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=2243 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:20.339000 audit[2243]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff7b61840 a2=0 a3=1 items=0 ppid=2203 pid=2243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.339000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:15:20.353915 kernel: kauditd_printk_skb: 471 callbacks suppressed Oct 2 19:15:20.354059 kernel: audit: type=1325 audit(1696274120.348:609): table=nat:12 family=2 entries=1 op=nft_register_rule pid=2245 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:20.348000 audit[2245]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=2245 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:20.348000 audit[2245]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffde032f10 a2=0 a3=1 items=0 ppid=2203 pid=2245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.368374 kernel: audit: type=1300 audit(1696274120.348:609): arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffde032f10 a2=0 a3=1 items=0 ppid=2203 pid=2245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.348000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:15:20.376966 kernel: audit: type=1327 audit(1696274120.348:609): proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:15:20.362000 audit[2247]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2247 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:20.362000 audit[2247]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe1e0a9e0 a2=0 a3=1 items=0 ppid=2203 pid=2247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.416155 kernel: audit: type=1325 audit(1696274120.362:610): table=nat:13 family=2 entries=2 op=nft_register_chain pid=2247 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:20.416281 kernel: audit: type=1300 audit(1696274120.362:610): arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe1e0a9e0 a2=0 a3=1 items=0 ppid=2203 pid=2247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.362000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:15:20.426028 kernel: audit: type=1327 audit(1696274120.362:610): proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:15:20.419000 audit[2250]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2250 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:20.431724 kernel: audit: type=1325 audit(1696274120.419:611): table=nat:14 family=2 entries=1 op=nft_register_rule pid=2250 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:20.419000 audit[2250]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffe72bdcd0 a2=0 a3=1 items=0 ppid=2203 pid=2250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.419000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:15:20.452114 kernel: audit: type=1300 audit(1696274120.419:611): arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffe72bdcd0 a2=0 a3=1 items=0 ppid=2203 pid=2250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.452266 kernel: audit: type=1327 audit(1696274120.419:611): proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:15:20.452327 kubelet[2203]: I1002 19:15:20.443961 2203 kubelet_node_status.go:70] "Attempting to register node" node="172.31.22.12" Oct 2 19:15:20.428000 audit[2252]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=2252 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:20.458832 kernel: audit: type=1325 audit(1696274120.428:612): table=nat:15 family=2 entries=1 op=nft_register_rule pid=2252 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:20.458960 kubelet[2203]: E1002 19:15:20.452786 2203 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.22.12" Oct 2 19:15:20.458960 kubelet[2203]: E1002 19:15:20.452892 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a605336f111b3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.22.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178733491, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 443903361, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.12.178a605336f111b3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:20.428000 audit[2252]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=fffff9bacea0 a2=0 a3=1 items=0 ppid=2203 pid=2252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.428000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:15:20.459783 kubelet[2203]: E1002 19:15:20.459663 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a605336f176c3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.22.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178759363, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 443914389, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.12.178a605336f176c3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:20.460000 audit[2255]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=2255 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:20.460000 audit[2255]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=fffff8f70f40 a2=0 a3=1 items=0 ppid=2203 pid=2255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.460000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:15:20.461792 kubelet[2203]: I1002 19:15:20.461744 2203 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Oct 2 19:15:20.464000 audit[2256]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=2256 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:20.464000 audit[2256]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffeaff0c0 a2=0 a3=1 items=0 ppid=2203 pid=2256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.464000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:15:20.465000 audit[2257]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=2257 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:20.465000 audit[2257]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeccf4130 a2=0 a3=1 items=0 ppid=2203 pid=2257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.465000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:15:20.468000 audit[2258]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=2258 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:20.468000 audit[2258]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe2ac78e0 a2=0 a3=1 items=0 ppid=2203 pid=2258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.468000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:15:20.469000 audit[2259]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=2259 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:20.469000 audit[2259]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc4d09f90 a2=0 a3=1 items=0 ppid=2203 pid=2259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.469000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:15:20.474000 audit[2260]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=2260 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:20.474000 audit[2260]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff5f88480 a2=0 a3=1 items=0 ppid=2203 pid=2260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.474000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:15:20.477000 audit[2262]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=2262 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:20.477000 audit[2262]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffc5594c30 a2=0 a3=1 items=0 ppid=2203 pid=2262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.477000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:15:20.481000 audit[2263]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=2263 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:20.481000 audit[2263]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffc8091560 a2=0 a3=1 items=0 ppid=2203 pid=2263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.481000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:15:20.490977 kubelet[2203]: E1002 19:15:20.490821 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a605336f18a5b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.22.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178764379, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 443922273, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.12.178a605336f18a5b" is forbidden: User 
"system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:20.490000 audit[2265]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=2265 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:20.490000 audit[2265]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffcdc748c0 a2=0 a3=1 items=0 ppid=2203 pid=2265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.490000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:15:20.494000 audit[2266]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=2266 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:20.494000 audit[2266]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd7652150 a2=0 a3=1 items=0 ppid=2203 pid=2266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.494000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:15:20.498000 audit[2267]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=2267 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:20.498000 audit[2267]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd7327160 a2=0 a3=1 items=0 ppid=2203 pid=2267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.498000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:15:20.506000 audit[2269]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=2269 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:20.506000 audit[2269]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffeb434090 a2=0 a3=1 items=0 ppid=2203 pid=2269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.506000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:15:20.514000 audit[2271]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=2271 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:20.514000 audit[2271]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc0355350 a2=0 a3=1 items=0 ppid=2203 pid=2271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.514000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:15:20.522000 audit[2273]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=2273 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:20.522000 audit[2273]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffcb6c6420 a2=0 a3=1 items=0 ppid=2203 pid=2273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.522000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:15:20.530000 audit[2275]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=2275 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:20.530000 audit[2275]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=fffffd4a47f0 a2=0 a3=1 items=0 ppid=2203 pid=2275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.530000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:15:20.540000 audit[2277]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=2277 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:20.540000 audit[2277]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=ffffd70962c0 a2=0 a3=1 items=0 ppid=2203 pid=2277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.540000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:15:20.542416 kubelet[2203]: I1002 19:15:20.542378 2203 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Oct 2 19:15:20.542416 kubelet[2203]: I1002 19:15:20.542422 2203 status_manager.go:176] "Starting to sync pod status with apiserver" Oct 2 19:15:20.542608 kubelet[2203]: I1002 19:15:20.542463 2203 kubelet.go:2113] "Starting kubelet main sync loop" Oct 2 19:15:20.542608 kubelet[2203]: E1002 19:15:20.542560 2203 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:15:20.544469 kubelet[2203]: W1002 19:15:20.544436 2203 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:15:20.544672 kubelet[2203]: E1002 19:15:20.544650 2203 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:15:20.545000 audit[2278]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=2278 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:20.545000 audit[2278]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcaeacb50 a2=0 a3=1 items=0 ppid=2203 pid=2278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.545000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:15:20.549000 audit[2279]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=2279 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:20.549000 audit[2279]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeea4ea90 a2=0 a3=1 items=0 ppid=2203 pid=2279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.549000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:15:20.552000 audit[2280]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2280 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:20.552000 audit[2280]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe3151fb0 a2=0 a3=1 items=0 ppid=2203 pid=2280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:20.552000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:15:20.727910 kubelet[2203]: E1002 19:15:20.727847 2203 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "172.31.22.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:15:20.854603 kubelet[2203]: I1002 19:15:20.854547 2203 kubelet_node_status.go:70] "Attempting to register node" 
node="172.31.22.12" Oct 2 19:15:20.856512 kubelet[2203]: E1002 19:15:20.856219 2203 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.22.12" Oct 2 19:15:20.856512 kubelet[2203]: E1002 19:15:20.856211 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a605336f111b3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.22.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178733491, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 854491547, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.12.178a605336f111b3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:20.891360 kubelet[2203]: E1002 19:15:20.891090 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a605336f176c3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.22.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178759363, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 854499527, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.12.178a605336f176c3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:21.082546 kubelet[2203]: E1002 19:15:21.082368 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:21.090990 kubelet[2203]: E1002 19:15:21.090837 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a605336f18a5b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.22.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178764379, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 854504291, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.12.178a605336f18a5b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:21.299293 kubelet[2203]: W1002 19:15:21.299212 2203 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:15:21.299293 kubelet[2203]: E1002 19:15:21.299258 2203 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:15:21.311770 kubelet[2203]: W1002 19:15:21.311699 2203 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.22.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:15:21.311770 kubelet[2203]: E1002 19:15:21.311738 2203 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.22.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:15:21.524359 kubelet[2203]: W1002 19:15:21.524291 2203 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:15:21.524359 kubelet[2203]: E1002 19:15:21.524358 2203 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:15:21.529675 kubelet[2203]: E1002 19:15:21.529590 
2203 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "172.31.22.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:15:21.657961 kubelet[2203]: I1002 19:15:21.657914 2203 kubelet_node_status.go:70] "Attempting to register node" node="172.31.22.12" Oct 2 19:15:21.659820 kubelet[2203]: E1002 19:15:21.659775 2203 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.22.12" Oct 2 19:15:21.660008 kubelet[2203]: E1002 19:15:21.659801 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a605336f111b3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.22.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178733491, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 21, 657793787, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.12.178a605336f111b3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:21.661516 kubelet[2203]: E1002 19:15:21.661413 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a605336f176c3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.22.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178759363, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 21, 657801551, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.12.178a605336f176c3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:21.690361 kubelet[2203]: E1002 19:15:21.690229 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a605336f18a5b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.22.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178764379, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 21, 657817415, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.12.178a605336f18a5b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:21.897858 kubelet[2203]: W1002 19:15:21.897815 2203 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:15:21.898054 kubelet[2203]: E1002 19:15:21.897865 2203 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:15:22.083078 kubelet[2203]: E1002 19:15:22.082997 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:23.083830 kubelet[2203]: E1002 19:15:23.083774 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:23.131048 kubelet[2203]: E1002 19:15:23.130994 2203 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "172.31.22.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:15:23.261543 kubelet[2203]: I1002 19:15:23.261510 2203 kubelet_node_status.go:70] "Attempting to register node" node="172.31.22.12" Oct 2 19:15:23.266454 kubelet[2203]: E1002 19:15:23.266317 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a605336f111b3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.22.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178733491, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 23, 261416459, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.12.178a605336f111b3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:23.270251 kubelet[2203]: E1002 19:15:23.270222 2203 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.22.12" Oct 2 19:15:23.276631 kubelet[2203]: W1002 19:15:23.276589 2203 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.22.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:15:23.276738 kubelet[2203]: E1002 19:15:23.276634 2203 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.22.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:15:23.276952 kubelet[2203]: E1002 19:15:23.276576 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a605336f176c3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.22.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178759363, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 23, 261467315, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.12.178a605336f176c3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:23.278910 kubelet[2203]: E1002 19:15:23.278774 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a605336f18a5b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.22.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178764379, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 23, 261473555, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.12.178a605336f18a5b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:23.499134 kubelet[2203]: W1002 19:15:23.499095 2203 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:15:23.499349 kubelet[2203]: E1002 19:15:23.499326 2203 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:15:23.950007 kubelet[2203]: W1002 19:15:23.949959 2203 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:15:23.950251 kubelet[2203]: E1002 19:15:23.950228 2203 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:15:23.978723 kubelet[2203]: W1002 19:15:23.978676 2203 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:15:23.978975 kubelet[2203]: E1002 19:15:23.978953 2203 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:15:24.084635 kubelet[2203]: E1002 19:15:24.084591 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:15:25.085816 kubelet[2203]: E1002 19:15:25.085774 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:26.087630 kubelet[2203]: E1002 19:15:26.087558 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:26.333015 kubelet[2203]: E1002 19:15:26.332963 2203 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "172.31.22.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:15:26.471837 kubelet[2203]: I1002 19:15:26.471804 2203 kubelet_node_status.go:70] "Attempting to register node" node="172.31.22.12" Oct 2 19:15:26.474052 kubelet[2203]: E1002 19:15:26.474019 2203 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.22.12" Oct 2 19:15:26.474431 kubelet[2203]: E1002 19:15:26.474268 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a605336f111b3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.22.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178733491, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 26, 471738843, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.12.178a605336f111b3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:26.475995 kubelet[2203]: E1002 19:15:26.475896 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a605336f176c3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.22.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178759363, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 26, 471758103, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.12.178a605336f176c3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:26.477744 kubelet[2203]: E1002 19:15:26.477643 2203 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.12.178a605336f18a5b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.12", UID:"172.31.22.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.22.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 178764379, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 26, 471762819, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.12.178a605336f18a5b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:27.088043 kubelet[2203]: E1002 19:15:27.087971 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:27.262888 kubelet[2203]: W1002 19:15:27.262833 2203 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:15:27.263072 kubelet[2203]: E1002 19:15:27.262912 2203 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:15:27.798751 kubelet[2203]: W1002 19:15:27.798714 2203 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:15:27.799004 kubelet[2203]: E1002 19:15:27.798982 2203 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:15:28.089048 kubelet[2203]: E1002 19:15:28.088699 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:29.090698 kubelet[2203]: E1002 19:15:29.090654 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:29.552360 kubelet[2203]: W1002 19:15:29.552303 2203 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.22.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:15:29.552360 kubelet[2203]: E1002 19:15:29.552358 2203 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.22.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:15:30.068293 kubelet[2203]: I1002 19:15:30.068220 2203 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:15:30.092039 kubelet[2203]: E1002 19:15:30.091990 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:30.249334 kubelet[2203]: E1002 19:15:30.249299 2203 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.22.12\" not found" Oct 2 19:15:30.468214 kubelet[2203]: E1002 19:15:30.468179 2203 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.22.12" not found Oct 2 19:15:31.092598 kubelet[2203]: E1002 19:15:31.092526 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:31.716619 kubelet[2203]: E1002 19:15:31.716563 2203 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes 
"172.31.22.12" not found Oct 2 19:15:32.092974 kubelet[2203]: E1002 19:15:32.092657 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:32.739624 kubelet[2203]: E1002 19:15:32.739568 2203 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.22.12\" not found" node="172.31.22.12" Oct 2 19:15:32.876223 kubelet[2203]: I1002 19:15:32.876171 2203 kubelet_node_status.go:70] "Attempting to register node" node="172.31.22.12" Oct 2 19:15:33.094936 kubelet[2203]: E1002 19:15:33.094575 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:33.118471 kubelet[2203]: I1002 19:15:33.118413 2203 kubelet_node_status.go:73] "Successfully registered node" node="172.31.22.12" Oct 2 19:15:33.139460 kubelet[2203]: E1002 19:15:33.139412 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:33.240074 kubelet[2203]: E1002 19:15:33.240003 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:33.341142 kubelet[2203]: E1002 19:15:33.341092 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:33.438359 sudo[2000]: pam_unix(sudo:session): session closed for user root Oct 2 19:15:33.437000 audit[2000]: USER_END pid=2000 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:15:33.440959 kernel: kauditd_printk_skb: 59 callbacks suppressed Oct 2 19:15:33.441022 kernel: audit: type=1106 audit(1696274133.437:632): pid=2000 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:15:33.441260 kubelet[2203]: E1002 19:15:33.441228 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:33.437000 audit[2000]: CRED_DISP pid=2000 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:15:33.458293 kernel: audit: type=1104 audit(1696274133.437:633): pid=2000 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:15:33.464190 sshd[1997]: pam_unix(sshd:session): session closed for user core Oct 2 19:15:33.464000 audit[1997]: USER_END pid=1997 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:15:33.464000 audit[1997]: CRED_DISP pid=1997 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:15:33.480340 systemd-logind[1728]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:15:33.482140 systemd[1]: sshd@6-172.31.22.12:22-139.178.89.65:37710.service: Deactivated successfully. Oct 2 19:15:33.483505 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:15:33.485993 systemd-logind[1728]: Removed session 7. Oct 2 19:15:33.487982 kernel: audit: type=1106 audit(1696274133.464:634): pid=1997 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:15:33.488080 kernel: audit: type=1104 audit(1696274133.464:635): pid=1997 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:15:33.488232 kernel: audit: type=1131 audit(1696274133.481:636): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.22.12:22-139.178.89.65:37710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:33.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.22.12:22-139.178.89.65:37710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:15:33.541636 kubelet[2203]: E1002 19:15:33.541531 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:33.642011 kubelet[2203]: E1002 19:15:33.641941 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:33.744041 kubelet[2203]: E1002 19:15:33.742937 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:33.844143 kubelet[2203]: E1002 19:15:33.844089 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:33.944847 kubelet[2203]: E1002 19:15:33.944791 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:34.045825 kubelet[2203]: E1002 19:15:34.045474 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:34.095420 kubelet[2203]: E1002 19:15:34.095345 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:34.145920 kubelet[2203]: E1002 19:15:34.145854 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:34.246554 kubelet[2203]: E1002 19:15:34.246488 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:34.347744 kubelet[2203]: E1002 19:15:34.347328 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:34.448070 kubelet[2203]: E1002 19:15:34.448010 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:34.548824 kubelet[2203]: E1002 19:15:34.548771 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:34.649950 kubelet[2203]: E1002 19:15:34.649869 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:34.750597 kubelet[2203]: E1002 19:15:34.750540 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:34.851208 kubelet[2203]: E1002 19:15:34.851148 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:34.952182 kubelet[2203]: E1002 19:15:34.951813 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:34.965302 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Oct 2 19:15:34.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:34.973932 kernel: audit: type=1131 audit(1696274134.964:637): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:15:34.995000 audit: BPF prog-id=60 op=UNLOAD Oct 2 19:15:34.995000 audit: BPF prog-id=59 op=UNLOAD Oct 2 19:15:35.001243 kernel: audit: type=1334 audit(1696274134.995:638): prog-id=60 op=UNLOAD Oct 2 19:15:35.001322 kernel: audit: type=1334 audit(1696274134.995:639): prog-id=59 op=UNLOAD Oct 2 19:15:35.001378 kernel: audit: type=1334 audit(1696274134.995:640): prog-id=58 op=UNLOAD Oct 2 19:15:34.995000 audit: BPF prog-id=58 op=UNLOAD Oct 2 19:15:35.052622 kubelet[2203]: E1002 19:15:35.052579 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:35.096263 kubelet[2203]: E1002 19:15:35.096204 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:35.153953 kubelet[2203]: E1002 19:15:35.153893 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:35.254768 kubelet[2203]: E1002 19:15:35.254650 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:35.355816 kubelet[2203]: E1002 19:15:35.355772 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:35.456599 kubelet[2203]: E1002 19:15:35.456554 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:35.557599 kubelet[2203]: E1002 19:15:35.557471 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:35.658738 kubelet[2203]: E1002 19:15:35.658688 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:35.759324 kubelet[2203]: E1002 19:15:35.759279 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:35.860575 kubelet[2203]: E1002 19:15:35.859962 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:35.960635 kubelet[2203]: E1002 19:15:35.960568 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:36.061237 kubelet[2203]: E1002 19:15:36.061178 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:36.096853 kubelet[2203]: E1002 19:15:36.096825 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:36.161742 kubelet[2203]: E1002 19:15:36.161589 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:36.262590 kubelet[2203]: E1002 19:15:36.262526 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:36.363282 kubelet[2203]: E1002 19:15:36.363234 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:36.464071 kubelet[2203]: E1002 19:15:36.463927 2203 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:36.564845 kubelet[2203]: E1002 19:15:36.564776 2203 kubelet_node_status.go:458] "Error getting 
the current node from lister" err="node \"172.31.22.12\" not found" Oct 2 19:15:36.666422 kubelet[2203]: I1002 19:15:36.666390 2203 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:15:36.667294 env[1743]: time="2023-10-02T19:15:36.667202744Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 2 19:15:36.668107 kubelet[2203]: I1002 19:15:36.668078 2203 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:15:37.092428 kubelet[2203]: I1002 19:15:37.092395 2203 apiserver.go:52] "Watching apiserver" Oct 2 19:15:37.096246 kubelet[2203]: I1002 19:15:37.096194 2203 topology_manager.go:210] "Topology Admit Handler" Oct 2 19:15:37.096436 kubelet[2203]: I1002 19:15:37.096309 2203 topology_manager.go:210] "Topology Admit Handler" Oct 2 19:15:37.098220 kubelet[2203]: E1002 19:15:37.098187 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:37.106731 systemd[1]: Created slice kubepods-besteffort-podfd827912_9cad_4b1b_8f39_e51ea9d32d2f.slice. Oct 2 19:15:37.120922 kubelet[2203]: I1002 19:15:37.120867 2203 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 2 19:15:37.126964 systemd[1]: Created slice kubepods-burstable-podec8678b8_87b9_47df_9317_82b9208c54aa.slice. Oct 2 19:15:37.214213 kubelet[2203]: I1002 19:15:37.214143 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk8rm\" (UniqueName: \"kubernetes.io/projected/ec8678b8-87b9-47df-9317-82b9208c54aa-kube-api-access-mk8rm\") pod \"cilium-zlqcm\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " pod="kube-system/cilium-zlqcm" Oct 2 19:15:37.214406 kubelet[2203]: I1002 19:15:37.214253 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fd827912-9cad-4b1b-8f39-e51ea9d32d2f-kube-proxy\") pod \"kube-proxy-fnjn4\" (UID: \"fd827912-9cad-4b1b-8f39-e51ea9d32d2f\") " pod="kube-system/kube-proxy-fnjn4" Oct 2 19:15:37.214406 kubelet[2203]: I1002 19:15:37.214328 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-cilium-run\") pod \"cilium-zlqcm\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " pod="kube-system/cilium-zlqcm" Oct 2 19:15:37.214406 kubelet[2203]: I1002 19:15:37.214401 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-hostproc\") pod \"cilium-zlqcm\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " pod="kube-system/cilium-zlqcm" Oct 2 19:15:37.214576 kubelet[2203]: I1002 19:15:37.214451 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec8678b8-87b9-47df-9317-82b9208c54aa-cilium-config-path\") pod \"cilium-zlqcm\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " pod="kube-system/cilium-zlqcm" Oct 2 19:15:37.214576 kubelet[2203]: I1002 19:15:37.214522 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/fd827912-9cad-4b1b-8f39-e51ea9d32d2f-lib-modules\") pod \"kube-proxy-fnjn4\" (UID: \"fd827912-9cad-4b1b-8f39-e51ea9d32d2f\") " pod="kube-system/kube-proxy-fnjn4" Oct 2 19:15:37.214702 kubelet[2203]: I1002 19:15:37.214594 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-cilium-cgroup\") pod \"cilium-zlqcm\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " pod="kube-system/cilium-zlqcm" Oct 2 19:15:37.214702 kubelet[2203]: I1002 19:15:37.214665 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-cni-path\") pod \"cilium-zlqcm\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " pod="kube-system/cilium-zlqcm" Oct 2 19:15:37.214813 kubelet[2203]: I1002 19:15:37.214716 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ec8678b8-87b9-47df-9317-82b9208c54aa-hubble-tls\") pod \"cilium-zlqcm\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " pod="kube-system/cilium-zlqcm" Oct 2 19:15:37.214813 kubelet[2203]: I1002 19:15:37.214786 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd827912-9cad-4b1b-8f39-e51ea9d32d2f-xtables-lock\") pod \"kube-proxy-fnjn4\" (UID: \"fd827912-9cad-4b1b-8f39-e51ea9d32d2f\") " pod="kube-system/kube-proxy-fnjn4" Oct 2 19:15:37.214975 kubelet[2203]: I1002 19:15:37.214854 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blpll\" (UniqueName: \"kubernetes.io/projected/fd827912-9cad-4b1b-8f39-e51ea9d32d2f-kube-api-access-blpll\") pod \"kube-proxy-fnjn4\" (UID: \"fd827912-9cad-4b1b-8f39-e51ea9d32d2f\") " pod="kube-system/kube-proxy-fnjn4" Oct 2 19:15:37.214975 kubelet[2203]: I1002 19:15:37.214956 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-host-proc-sys-kernel\") pod \"cilium-zlqcm\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " pod="kube-system/cilium-zlqcm" Oct 2 19:15:37.215110 kubelet[2203]: I1002 19:15:37.215046 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ec8678b8-87b9-47df-9317-82b9208c54aa-clustermesh-secrets\") pod \"cilium-zlqcm\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " pod="kube-system/cilium-zlqcm" Oct 2 19:15:37.215173 kubelet[2203]: I1002 19:15:37.215117 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-host-proc-sys-net\") pod \"cilium-zlqcm\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " pod="kube-system/cilium-zlqcm" Oct 2 19:15:37.215233 kubelet[2203]: I1002 19:15:37.215187 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-bpf-maps\") pod \"cilium-zlqcm\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " 
pod="kube-system/cilium-zlqcm" Oct 2 19:15:37.215298 kubelet[2203]: I1002 19:15:37.215232 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-etc-cni-netd\") pod \"cilium-zlqcm\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " pod="kube-system/cilium-zlqcm" Oct 2 19:15:37.215368 kubelet[2203]: I1002 19:15:37.215303 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-lib-modules\") pod \"cilium-zlqcm\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " pod="kube-system/cilium-zlqcm" Oct 2 19:15:37.215429 kubelet[2203]: I1002 19:15:37.215371 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-xtables-lock\") pod \"cilium-zlqcm\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " pod="kube-system/cilium-zlqcm" Oct 2 19:15:37.215429 kubelet[2203]: I1002 19:15:37.215399 2203 reconciler.go:41] "Reconciler: start to sync state" Oct 2 19:15:37.423871 env[1743]: time="2023-10-02T19:15:37.422835587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fnjn4,Uid:fd827912-9cad-4b1b-8f39-e51ea9d32d2f,Namespace:kube-system,Attempt:0,}" Oct 2 19:15:37.741904 env[1743]: time="2023-10-02T19:15:37.741724691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zlqcm,Uid:ec8678b8-87b9-47df-9317-82b9208c54aa,Namespace:kube-system,Attempt:0,}" Oct 2 19:15:37.986282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount616134521.mount: Deactivated successfully. 
Oct 2 19:15:37.994094 env[1743]: time="2023-10-02T19:15:37.993939419Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:37.996090 env[1743]: time="2023-10-02T19:15:37.996022006Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:38.000757 env[1743]: time="2023-10-02T19:15:38.000699030Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:38.002784 env[1743]: time="2023-10-02T19:15:38.002734733Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:38.006980 env[1743]: time="2023-10-02T19:15:38.006930405Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:38.009965 env[1743]: time="2023-10-02T19:15:38.009910933Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:38.012221 env[1743]: time="2023-10-02T19:15:38.012158329Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:38.015567 env[1743]: time="2023-10-02T19:15:38.015507630Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:38.062092 env[1743]: time="2023-10-02T19:15:38.061952078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:15:38.062363 env[1743]: time="2023-10-02T19:15:38.062307087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:15:38.062633 env[1743]: time="2023-10-02T19:15:38.062573314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:15:38.062928 env[1743]: time="2023-10-02T19:15:38.062779884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:15:38.063037 env[1743]: time="2023-10-02T19:15:38.062939903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:15:38.063134 env[1743]: time="2023-10-02T19:15:38.062969365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:15:38.063632 env[1743]: time="2023-10-02T19:15:38.063486505Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/155107565c768041c38ebd72122d4dc0d4b92ad39745b1eef0c18296572b11e7 pid=2305 runtime=io.containerd.runc.v2 Oct 2 19:15:38.063954 env[1743]: time="2023-10-02T19:15:38.063852843Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c pid=2306 runtime=io.containerd.runc.v2 Oct 2 19:15:38.097309 systemd[1]: Started cri-containerd-1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c.scope. Oct 2 19:15:38.099408 kubelet[2203]: E1002 19:15:38.099111 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:38.116932 systemd[1]: Started cri-containerd-155107565c768041c38ebd72122d4dc0d4b92ad39745b1eef0c18296572b11e7.scope. Oct 2 19:15:38.162000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.162000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.162000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.162000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.162000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.162000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.162000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.162000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.162000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.170000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.173941 kernel: audit: type=1400 audit(1696274138.162:641): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.170000 audit: BPF prog-id=73 op=LOAD Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=2306 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:38.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131363863363136646465343133666632303336393531646561656664 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2306 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:38.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131363863363136646465343133666632303336393531646561656664 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: 
denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit: BPF prog-id=74 op=LOAD Oct 2 19:15:38.171000 audit[2326]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2306 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:38.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131363863363136646465343133666632303336393531646561656664 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit: BPF prog-id=75 op=LOAD Oct 2 19:15:38.171000 audit[2326]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=2306 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:38.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131363863363136646465343133666632303336393531646561656664 Oct 2 19:15:38.171000 audit: BPF prog-id=75 op=UNLOAD Oct 2 19:15:38.171000 audit: BPF prog-id=74 
op=UNLOAD Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.171000 audit: BPF prog-id=76 op=LOAD Oct 2 19:15:38.171000 audit[2326]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2306 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:38.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131363863363136646465343133666632303336393531646561656664 Oct 2 19:15:38.193000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.193000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.193000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.193000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.193000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.193000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.193000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.193000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.193000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.194000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.194000 audit: BPF prog-id=77 op=LOAD Oct 2 19:15:38.195000 audit[2324]: AVC avc: denied { bpf } for pid=2324 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.195000 audit[2324]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001bdb38 a2=10 a3=0 items=0 ppid=2305 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:38.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135353130373536356337363830343163333865626437323132326434 Oct 2 19:15:38.197000 audit[2324]: AVC avc: denied { perfmon } for pid=2324 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.197000 audit[2324]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001bd5a0 a2=3c a3=0 items=0 ppid=2305 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:38.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135353130373536356337363830343163333865626437323132326434 Oct 2 19:15:38.198000 audit[2324]: AVC avc: denied { bpf } for pid=2324 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.198000 audit[2324]: AVC avc: denied { bpf } for pid=2324 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:15:38.198000 audit[2324]: AVC avc: denied { bpf } for pid=2324 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.198000 audit[2324]: AVC avc: denied { perfmon } for pid=2324 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.198000 audit[2324]: AVC avc: denied { perfmon } for pid=2324 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.198000 audit[2324]: AVC avc: denied { perfmon } for pid=2324 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.198000 audit[2324]: AVC avc: denied { perfmon } for pid=2324 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.198000 audit[2324]: AVC avc: denied { perfmon } for pid=2324 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.198000 audit[2324]: AVC avc: denied { bpf } for pid=2324 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.198000 audit[2324]: AVC avc: denied { bpf } for pid=2324 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.198000 audit: BPF prog-id=78 op=LOAD Oct 2 19:15:38.198000 audit[2324]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bd8e0 a2=78 a3=0 items=0 ppid=2305 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:38.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135353130373536356337363830343163333865626437323132326434 Oct 2 19:15:38.199000 audit[2324]: AVC avc: denied { bpf } for pid=2324 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.199000 audit[2324]: AVC avc: denied { bpf } for pid=2324 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.199000 audit[2324]: AVC avc: denied { perfmon } for pid=2324 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.199000 audit[2324]: AVC avc: denied { perfmon } for pid=2324 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.199000 audit[2324]: AVC avc: denied { perfmon } for pid=2324 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.199000 audit[2324]: AVC avc: denied { perfmon } for pid=2324 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.199000 audit[2324]: AVC avc: denied { perfmon } for pid=2324 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.199000 audit[2324]: AVC avc: denied { bpf } for pid=2324 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.199000 audit[2324]: AVC avc: denied { bpf } for pid=2324 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.199000 audit: BPF prog-id=79 op=LOAD Oct 2 19:15:38.199000 audit[2324]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001bd670 a2=78 a3=0 items=0 ppid=2305 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:38.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135353130373536356337363830343163333865626437323132326434 Oct 2 19:15:38.200000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:15:38.200000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:15:38.200000 audit[2324]: AVC avc: denied { bpf } for pid=2324 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.200000 audit[2324]: AVC avc: denied { bpf } for pid=2324 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.200000 audit[2324]: AVC avc: denied { bpf } for pid=2324 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.200000 audit[2324]: AVC avc: denied { perfmon } for pid=2324 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.200000 audit[2324]: AVC avc: denied { perfmon } for pid=2324 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.200000 audit[2324]: AVC avc: denied { perfmon } for pid=2324 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.200000 audit[2324]: AVC avc: denied { perfmon } for pid=2324 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.200000 audit[2324]: AVC avc: denied { perfmon } for pid=2324 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.200000 audit[2324]: AVC avc: denied { bpf } for pid=2324 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.200000 audit[2324]: AVC avc: denied { bpf } for pid=2324 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:38.200000 audit: BPF prog-id=80 op=LOAD Oct 2 19:15:38.200000 audit[2324]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bdb40 a2=78 a3=0 items=0 ppid=2305 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:38.200000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135353130373536356337363830343163333865626437323132326434 Oct 2 19:15:38.227064 env[1743]: time="2023-10-02T19:15:38.226975360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zlqcm,Uid:ec8678b8-87b9-47df-9317-82b9208c54aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\"" Oct 2 19:15:38.231242 env[1743]: time="2023-10-02T19:15:38.231162380Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 2 19:15:38.246993 env[1743]: time="2023-10-02T19:15:38.246834992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fnjn4,Uid:fd827912-9cad-4b1b-8f39-e51ea9d32d2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"155107565c768041c38ebd72122d4dc0d4b92ad39745b1eef0c18296572b11e7\"" Oct 2 19:15:39.099484 kubelet[2203]: E1002 19:15:39.099429 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:40.081590 kubelet[2203]: E1002 19:15:40.081531 2203 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:40.099964 kubelet[2203]: E1002 19:15:40.099912 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:41.100703 kubelet[2203]: E1002 19:15:41.100633 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:42.102820 kubelet[2203]: E1002 19:15:42.102760 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:43.102992 kubelet[2203]: E1002 19:15:43.102912 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:44.104132 kubelet[2203]: E1002 19:15:44.104059 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:44.757986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1054693794.mount: Deactivated successfully. 
Oct 2 19:15:45.105221 kubelet[2203]: E1002 19:15:45.105021 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:46.105915 kubelet[2203]: E1002 19:15:46.105805 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:47.106221 kubelet[2203]: E1002 19:15:47.106130 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:48.106724 kubelet[2203]: E1002 19:15:48.106664 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:48.742552 env[1743]: time="2023-10-02T19:15:48.742468527Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:48.760202 env[1743]: time="2023-10-02T19:15:48.760109928Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:48.775654 env[1743]: time="2023-10-02T19:15:48.775571005Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:48.777209 env[1743]: time="2023-10-02T19:15:48.777138082Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 2 19:15:48.780955 env[1743]: time="2023-10-02T19:15:48.780903316Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.9\"" Oct 2 19:15:48.782914 env[1743]: time="2023-10-02T19:15:48.782795629Z" level=info msg="CreateContainer within sandbox \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:15:48.851447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount979183319.mount: Deactivated successfully. Oct 2 19:15:49.103531 env[1743]: time="2023-10-02T19:15:49.103361250Z" level=info msg="CreateContainer within sandbox \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"31128ecb959bb670d7f2cf39d8ae7e41b06f4b657dfbb4a0243983d261db510b\"" Oct 2 19:15:49.105928 env[1743]: time="2023-10-02T19:15:49.105839743Z" level=info msg="StartContainer for \"31128ecb959bb670d7f2cf39d8ae7e41b06f4b657dfbb4a0243983d261db510b\"" Oct 2 19:15:49.122946 kubelet[2203]: E1002 19:15:49.106984 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:49.174421 systemd[1]: Started cri-containerd-31128ecb959bb670d7f2cf39d8ae7e41b06f4b657dfbb4a0243983d261db510b.scope. Oct 2 19:15:49.217237 systemd[1]: cri-containerd-31128ecb959bb670d7f2cf39d8ae7e41b06f4b657dfbb4a0243983d261db510b.scope: Deactivated successfully. Oct 2 19:15:49.842740 systemd[1]: run-containerd-runc-k8s.io-31128ecb959bb670d7f2cf39d8ae7e41b06f4b657dfbb4a0243983d261db510b-runc.C2Y9Sn.mount: Deactivated successfully. 
Oct 2 19:15:49.842948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31128ecb959bb670d7f2cf39d8ae7e41b06f4b657dfbb4a0243983d261db510b-rootfs.mount: Deactivated successfully. Oct 2 19:15:49.861891 update_engine[1729]: I1002 19:15:49.860980 1729 update_attempter.cc:505] Updating boot flags... Oct 2 19:15:49.966746 env[1743]: time="2023-10-02T19:15:49.966637779Z" level=info msg="shim disconnected" id=31128ecb959bb670d7f2cf39d8ae7e41b06f4b657dfbb4a0243983d261db510b Oct 2 19:15:49.966746 env[1743]: time="2023-10-02T19:15:49.966741535Z" level=warning msg="cleaning up after shim disconnected" id=31128ecb959bb670d7f2cf39d8ae7e41b06f4b657dfbb4a0243983d261db510b namespace=k8s.io Oct 2 19:15:49.967394 env[1743]: time="2023-10-02T19:15:49.966763916Z" level=info msg="cleaning up dead shim" Oct 2 19:15:50.007229 env[1743]: time="2023-10-02T19:15:50.007138326Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:15:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2401 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:15:50Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/31128ecb959bb670d7f2cf39d8ae7e41b06f4b657dfbb4a0243983d261db510b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:15:50.007749 env[1743]: time="2023-10-02T19:15:50.007597580Z" level=error msg="copy shim log" error="read /proc/self/fd/45: file already closed" Oct 2 19:15:50.011027 env[1743]: time="2023-10-02T19:15:50.010954484Z" level=error msg="Failed to pipe stdout of container \"31128ecb959bb670d7f2cf39d8ae7e41b06f4b657dfbb4a0243983d261db510b\"" error="reading from a closed fifo" Oct 2 19:15:50.011328 env[1743]: time="2023-10-02T19:15:50.011257914Z" level=error msg="Failed to pipe stderr of container \"31128ecb959bb670d7f2cf39d8ae7e41b06f4b657dfbb4a0243983d261db510b\"" error="reading from a closed fifo" Oct 2 19:15:50.016896 env[1743]: time="2023-10-02T19:15:50.016760271Z" level=error msg="StartContainer for \"31128ecb959bb670d7f2cf39d8ae7e41b06f4b657dfbb4a0243983d261db510b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:15:50.017279 kubelet[2203]: E1002 19:15:50.017184 2203 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="31128ecb959bb670d7f2cf39d8ae7e41b06f4b657dfbb4a0243983d261db510b" Oct 2 19:15:50.017425 kubelet[2203]: E1002 19:15:50.017360 2203 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:15:50.017425 kubelet[2203]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:15:50.017425 kubelet[2203]: rm /hostbin/cilium-mount Oct 2 19:15:50.017425 kubelet[2203]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mk8rm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:15:50.017793 kubelet[2203]: E1002 19:15:50.017426 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zlqcm" podUID=ec8678b8-87b9-47df-9317-82b9208c54aa Oct 2 19:15:50.108386 kubelet[2203]: E1002 19:15:50.107264 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:50.619640 env[1743]: time="2023-10-02T19:15:50.619552485Z" level=info msg="CreateContainer within sandbox \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:15:50.644544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2184331134.mount: Deactivated successfully. Oct 2 19:15:50.672419 env[1743]: time="2023-10-02T19:15:50.669835312Z" level=info msg="CreateContainer within sandbox \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f\"" Oct 2 19:15:50.676759 env[1743]: time="2023-10-02T19:15:50.676704285Z" level=info msg="StartContainer for \"154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f\"" Oct 2 19:15:50.778364 systemd[1]: Started cri-containerd-154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f.scope. Oct 2 19:15:50.845926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1749986420.mount: Deactivated successfully. 
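For readability, the mount-cgroup entrypoint that the kubelet spec dump above records reassembles to the shell script below; the command and the two environment values are taken verbatim from the Command and Env fields in the log, and only line breaks and quoting are added here:

    # mount-cgroup init container command as logged (a single sh -ec argument)
    sh -ec '
      cp /usr/bin/cilium-mount /hostbin/cilium-mount;
      nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
      rm /hostbin/cilium-mount
    '
    # with CGROUP_ROOT=/run/cilium/cgroupv2 and BIN_PATH=/opt/cni/bin per the Env list above

The container never gets that far, though: per the StartContainer error above, runc fails during container init, before the script runs.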
Oct 2 19:15:50.879370 systemd[1]: cri-containerd-154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f.scope: Deactivated successfully. Oct 2 19:15:50.895586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f-rootfs.mount: Deactivated successfully. Oct 2 19:15:50.974249 env[1743]: time="2023-10-02T19:15:50.973603541Z" level=info msg="shim disconnected" id=154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f Oct 2 19:15:50.974249 env[1743]: time="2023-10-02T19:15:50.973675279Z" level=warning msg="cleaning up after shim disconnected" id=154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f namespace=k8s.io Oct 2 19:15:50.974249 env[1743]: time="2023-10-02T19:15:50.973696196Z" level=info msg="cleaning up dead shim" Oct 2 19:15:51.059426 env[1743]: time="2023-10-02T19:15:51.059279924Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:15:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2617 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:15:51Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:15:51.060313 env[1743]: time="2023-10-02T19:15:51.060056095Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:15:51.060664 env[1743]: time="2023-10-02T19:15:51.060608280Z" level=error msg="Failed to pipe stdout of container \"154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f\"" error="reading from a closed fifo" Oct 2 19:15:51.061759 env[1743]: time="2023-10-02T19:15:51.060947818Z" level=error msg="Failed to pipe stderr of container \"154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f\"" error="reading from a closed fifo" Oct 2 19:15:51.065170 env[1743]: time="2023-10-02T19:15:51.065079663Z" level=error msg="StartContainer for \"154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:15:51.065418 kubelet[2203]: E1002 19:15:51.065381 2203 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f" Oct 2 19:15:51.065978 kubelet[2203]: E1002 19:15:51.065530 2203 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:15:51.065978 kubelet[2203]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:15:51.065978 kubelet[2203]: rm /hostbin/cilium-mount Oct 2 19:15:51.065978 kubelet[2203]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mk8rm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:15:51.066392 kubelet[2203]: E1002 19:15:51.065595 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zlqcm" podUID=ec8678b8-87b9-47df-9317-82b9208c54aa Oct 2 19:15:51.108178 kubelet[2203]: E1002 19:15:51.108087 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:51.430355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3057221795.mount: Deactivated successfully. 
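Both attempts (31128ecb… and 154b8410…) fail at the same point: runc's container init aborts while writing /proc/self/attr/keycreate, the procfs attribute that sets the SELinux label applied to keyrings the new process creates, and the kernel rejects the write with EINVAL. One plausible reading is that the SELinux type requested in the spec above (Type:spc_t) is not accepted for that attribute by the policy loaded on this host. Hedged checks one might run on the node; the commands are illustrative and not part of this log:

    getenforce                      # SELinux mode the host reports
    cat /proc/self/attr/current     # label of the current shell
    cat /proc/self/attr/keycreate   # keyring-creation label; runc writes the container's label here
    # A context the loaded policy does not accept makes that write fail with
    # "invalid argument", which is the StartContainer error logged above.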
Oct 2 19:15:51.619385 kubelet[2203]: I1002 19:15:51.619324 2203 scope.go:115] "RemoveContainer" containerID="31128ecb959bb670d7f2cf39d8ae7e41b06f4b657dfbb4a0243983d261db510b" Oct 2 19:15:51.620005 kubelet[2203]: I1002 19:15:51.619950 2203 scope.go:115] "RemoveContainer" containerID="31128ecb959bb670d7f2cf39d8ae7e41b06f4b657dfbb4a0243983d261db510b" Oct 2 19:15:51.622820 env[1743]: time="2023-10-02T19:15:51.622763348Z" level=info msg="RemoveContainer for \"31128ecb959bb670d7f2cf39d8ae7e41b06f4b657dfbb4a0243983d261db510b\"" Oct 2 19:15:51.626757 env[1743]: time="2023-10-02T19:15:51.626699227Z" level=info msg="RemoveContainer for \"31128ecb959bb670d7f2cf39d8ae7e41b06f4b657dfbb4a0243983d261db510b\" returns successfully" Oct 2 19:15:51.627275 env[1743]: time="2023-10-02T19:15:51.627236087Z" level=info msg="RemoveContainer for \"31128ecb959bb670d7f2cf39d8ae7e41b06f4b657dfbb4a0243983d261db510b\"" Oct 2 19:15:51.627465 env[1743]: time="2023-10-02T19:15:51.627428417Z" level=info msg="RemoveContainer for \"31128ecb959bb670d7f2cf39d8ae7e41b06f4b657dfbb4a0243983d261db510b\" returns successfully" Oct 2 19:15:51.628596 kubelet[2203]: E1002 19:15:51.628544 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa)\"" pod="kube-system/cilium-zlqcm" podUID=ec8678b8-87b9-47df-9317-82b9208c54aa Oct 2 19:15:52.108703 kubelet[2203]: E1002 19:15:52.108654 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:52.127199 env[1743]: time="2023-10-02T19:15:52.127116855Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:52.131417 env[1743]: time="2023-10-02T19:15:52.131348114Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0393a046c6ac3c39d56f9b536c02216184f07904e0db26449490d0cb1d1fe343,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:52.135186 env[1743]: time="2023-10-02T19:15:52.135111444Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:52.138835 env[1743]: time="2023-10-02T19:15:52.138769184Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.9\" returns image reference \"sha256:0393a046c6ac3c39d56f9b536c02216184f07904e0db26449490d0cb1d1fe343\"" Oct 2 19:15:52.139039 env[1743]: time="2023-10-02T19:15:52.137680873Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d8c8e3e8fe630c3f2d84a22722d4891343196483ac4cc02c1ba9345b1bfc8a3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:52.143663 env[1743]: time="2023-10-02T19:15:52.143607564Z" level=info msg="CreateContainer within sandbox \"155107565c768041c38ebd72122d4dc0d4b92ad39745b1eef0c18296572b11e7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:15:52.163462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2340254197.mount: Deactivated successfully. 
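The audit records that follow hex-encode the full command line of the audited process in their PROCTITLE field, as NUL-separated argv. To read them, decode the hex and replace the NULs; for example, taking the iptables record at 19:15:52.501 below (xxd and tr are assumed to be available on the host):

    # Decode an audit PROCTITLE value: hex -> bytes, NUL separators -> spaces
    echo 69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 \
      | xxd -r -p | tr '\0' ' '; echo
    # Prints: iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle
    # i.e. kube-proxy creating its canary chain in the mangle table, which is what the long
    # run of NETFILTER_CFG / PROCTITLE records below documents chain by chain.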
Oct 2 19:15:52.176546 env[1743]: time="2023-10-02T19:15:52.176479972Z" level=info msg="CreateContainer within sandbox \"155107565c768041c38ebd72122d4dc0d4b92ad39745b1eef0c18296572b11e7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7f1e5c0da1934483a2a23495f649946ec875468eb38126980600cbfeabf42556\"" Oct 2 19:15:52.177697 env[1743]: time="2023-10-02T19:15:52.177651385Z" level=info msg="StartContainer for \"7f1e5c0da1934483a2a23495f649946ec875468eb38126980600cbfeabf42556\"" Oct 2 19:15:52.228395 systemd[1]: Started cri-containerd-7f1e5c0da1934483a2a23495f649946ec875468eb38126980600cbfeabf42556.scope. Oct 2 19:15:52.282000 audit[2639]: AVC avc: denied { perfmon } for pid=2639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.286579 kernel: kauditd_printk_skb: 113 callbacks suppressed Oct 2 19:15:52.286666 kernel: audit: type=1400 audit(1696274152.282:677): avc: denied { perfmon } for pid=2639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.282000 audit[2639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2305 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.305347 kernel: audit: type=1300 audit(1696274152.282:677): arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2305 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766316535633064613139333434383361326132333439356636343939 Oct 2 19:15:52.316419 kernel: audit: type=1327 audit(1696274152.282:677): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766316535633064613139333434383361326132333439356636343939 Oct 2 19:15:52.282000 audit[2639]: AVC avc: denied { bpf } for pid=2639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.324771 kernel: audit: type=1400 audit(1696274152.282:678): avc: denied { bpf } for pid=2639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.282000 audit[2639]: AVC avc: denied { bpf } for pid=2639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.333869 kernel: audit: type=1400 audit(1696274152.282:678): avc: denied { bpf } for pid=2639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.282000 audit[2639]: AVC avc: denied { bpf } for pid=2639 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.341404 kernel: audit: type=1400 audit(1696274152.282:678): avc: denied { bpf } for pid=2639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.282000 audit[2639]: AVC avc: denied { perfmon } for pid=2639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.349084 kernel: audit: type=1400 audit(1696274152.282:678): avc: denied { perfmon } for pid=2639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.282000 audit[2639]: AVC avc: denied { perfmon } for pid=2639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.357558 kernel: audit: type=1400 audit(1696274152.282:678): avc: denied { perfmon } for pid=2639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.282000 audit[2639]: AVC avc: denied { perfmon } for pid=2639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.359920 kernel: audit: type=1400 audit(1696274152.282:678): avc: denied { perfmon } for pid=2639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.282000 audit[2639]: AVC avc: denied { perfmon } for pid=2639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.376302 kernel: audit: type=1400 audit(1696274152.282:678): avc: denied { perfmon } for pid=2639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.282000 audit[2639]: AVC avc: denied { perfmon } for pid=2639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.282000 audit[2639]: AVC avc: denied { bpf } for pid=2639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.282000 audit[2639]: AVC avc: denied { bpf } for pid=2639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.282000 audit: BPF prog-id=81 op=LOAD Oct 2 19:15:52.282000 audit[2639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2305 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.377437 env[1743]: time="2023-10-02T19:15:52.377373096Z" level=info msg="StartContainer for \"7f1e5c0da1934483a2a23495f649946ec875468eb38126980600cbfeabf42556\" returns successfully" Oct 2 19:15:52.282000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766316535633064613139333434383361326132333439356636343939 Oct 2 19:15:52.283000 audit[2639]: AVC avc: denied { bpf } for pid=2639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.283000 audit[2639]: AVC avc: denied { bpf } for pid=2639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.283000 audit[2639]: AVC avc: denied { perfmon } for pid=2639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.283000 audit[2639]: AVC avc: denied { perfmon } for pid=2639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.283000 audit[2639]: AVC avc: denied { perfmon } for pid=2639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.283000 audit[2639]: AVC avc: denied { perfmon } for pid=2639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.283000 audit[2639]: AVC avc: denied { perfmon } for pid=2639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.283000 audit[2639]: AVC avc: denied { bpf } for pid=2639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.283000 audit[2639]: AVC avc: denied { bpf } for pid=2639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.283000 audit: BPF prog-id=82 op=LOAD Oct 2 19:15:52.283000 audit[2639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=2305 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766316535633064613139333434383361326132333439356636343939 Oct 2 19:15:52.283000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:15:52.283000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:15:52.283000 audit[2639]: AVC avc: denied { bpf } for pid=2639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.283000 audit[2639]: AVC avc: denied { bpf } for pid=2639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.283000 audit[2639]: AVC avc: denied { bpf } for pid=2639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.283000 audit[2639]: AVC avc: denied { perfmon } for pid=2639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.283000 audit[2639]: AVC avc: denied { perfmon } for pid=2639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.283000 audit[2639]: AVC avc: denied { perfmon } for pid=2639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.283000 audit[2639]: AVC avc: denied { perfmon } for pid=2639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.283000 audit[2639]: AVC avc: denied { perfmon } for pid=2639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.283000 audit[2639]: AVC avc: denied { bpf } for pid=2639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.283000 audit[2639]: AVC avc: denied { bpf } for pid=2639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:52.283000 audit: BPF prog-id=83 op=LOAD Oct 2 19:15:52.283000 audit[2639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2305 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766316535633064613139333434383361326132333439356636343939 Oct 2 19:15:52.501000 audit[2691]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=2691 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:52.501000 audit[2691]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd74affa0 a2=0 a3=ffff8e54c6c0 items=0 ppid=2651 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.501000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:15:52.504000 audit[2692]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=2692 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:52.504000 audit[2692]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc48c5d70 a2=0 a3=ffffa0def6c0 items=0 ppid=2651 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.504000 audit[2693]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain 
pid=2693 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:52.504000 audit[2693]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff63908d0 a2=0 a3=ffffa14656c0 items=0 ppid=2651 pid=2693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.504000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:15:52.504000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:15:52.507000 audit[2694]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_chain pid=2694 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:52.507000 audit[2694]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe7987170 a2=0 a3=ffff839266c0 items=0 ppid=2651 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.507000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:15:52.509000 audit[2695]: NETFILTER_CFG table=nat:39 family=10 entries=1 op=nft_register_chain pid=2695 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:52.509000 audit[2695]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffc2b4500 a2=0 a3=ffffa2f576c0 items=0 ppid=2651 pid=2695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.509000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:15:52.513000 audit[2696]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=2696 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:52.513000 audit[2696]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdd29e500 a2=0 a3=ffff94c1c6c0 items=0 ppid=2651 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.513000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:15:52.613000 audit[2697]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2697 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:52.613000 audit[2697]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff404a280 a2=0 a3=ffffb15f46c0 items=0 ppid=2651 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.613000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:15:52.630482 kubelet[2203]: E1002 19:15:52.630433 2203 pod_workers.go:965] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa)\"" pod="kube-system/cilium-zlqcm" podUID=ec8678b8-87b9-47df-9317-82b9208c54aa Oct 2 19:15:52.629000 audit[2699]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2699 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:52.629000 audit[2699]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff54efe80 a2=0 a3=ffffb5a486c0 items=0 ppid=2651 pid=2699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.629000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:15:52.640567 kubelet[2203]: I1002 19:15:52.640529 2203 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fnjn4" podStartSLOduration=-9.223372017214336e+09 pod.CreationTimestamp="2023-10-02 19:15:33 +0000 UTC" firstStartedPulling="2023-10-02 19:15:38.250779583 +0000 UTC m=+19.319483167" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 19:15:52.639623904 +0000 UTC m=+33.708327512" watchObservedRunningTime="2023-10-02 19:15:52.640439495 +0000 UTC m=+33.709143103" Oct 2 19:15:52.647000 audit[2702]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=2702 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:52.647000 audit[2702]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe16fa560 a2=0 a3=ffffb075f6c0 items=0 ppid=2651 pid=2702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.647000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:15:52.650000 audit[2703]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2703 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:52.650000 audit[2703]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc1630350 a2=0 a3=ffff8b8196c0 items=0 ppid=2651 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.650000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:15:52.659000 audit[2705]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2705 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:52.659000 audit[2705]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffebd40e40 a2=0 a3=ffff8c4366c0 items=0 ppid=2651 pid=2705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.659000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:15:52.665000 audit[2706]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=2706 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:52.665000 audit[2706]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6040480 a2=0 a3=ffffb9f0b6c0 items=0 ppid=2651 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.665000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:15:52.673000 audit[2708]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=2708 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:52.673000 audit[2708]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdc097cd0 a2=0 a3=ffff8a4416c0 items=0 ppid=2651 pid=2708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.673000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:15:52.685000 audit[2711]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2711 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:52.685000 audit[2711]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd5e1e340 a2=0 a3=ffff87efc6c0 items=0 ppid=2651 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.685000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:15:52.689000 audit[2712]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2712 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:52.689000 audit[2712]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe602b780 a2=0 a3=ffffa05a96c0 items=0 ppid=2651 pid=2712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.689000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:15:52.697000 audit[2714]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2714 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Oct 2 19:15:52.697000 audit[2714]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd1f29430 a2=0 a3=ffff80a796c0 items=0 ppid=2651 pid=2714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.697000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:15:52.701000 audit[2715]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2715 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:52.701000 audit[2715]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdda24890 a2=0 a3=ffffbea626c0 items=0 ppid=2651 pid=2715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.701000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:15:52.709000 audit[2717]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=2717 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:52.709000 audit[2717]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc3ba3590 a2=0 a3=ffff985556c0 items=0 ppid=2651 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.709000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:15:52.726000 audit[2720]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2720 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:52.726000 audit[2720]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff2df0c80 a2=0 a3=ffffa2b2b6c0 items=0 ppid=2651 pid=2720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.726000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:15:52.738000 audit[2723]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2723 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:52.738000 audit[2723]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd8dadae0 a2=0 a3=ffffbde846c0 items=0 ppid=2651 pid=2723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.738000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:15:52.744000 audit[2724]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=2724 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:52.744000 audit[2724]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe96588d0 a2=0 a3=ffff9b7c06c0 items=0 ppid=2651 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.744000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:15:52.752000 audit[2726]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=2726 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:52.752000 audit[2726]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffc2600e80 a2=0 a3=ffff906d96c0 items=0 ppid=2651 pid=2726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.752000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:15:52.764000 audit[2729]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=2729 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:52.764000 audit[2729]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffc6f813e0 a2=0 a3=ffff866a26c0 items=0 ppid=2651 pid=2729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.764000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:15:52.791000 audit[2733]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=2733 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:15:52.791000 audit[2733]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=fffff5dd5570 a2=0 a3=ffff87e3c6c0 items=0 ppid=2651 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.791000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:15:52.807000 audit[2733]: NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=2733 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:15:52.807000 audit[2733]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=fffff5dd5570 a2=0 a3=ffff87e3c6c0 items=0 ppid=2651 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.807000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:15:52.818000 audit[2737]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=2737 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:52.818000 audit[2737]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffcc912fe0 a2=0 a3=ffff9ffef6c0 items=0 ppid=2651 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.818000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:15:52.828000 audit[2739]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=2739 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:52.828000 audit[2739]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd279e610 a2=0 a3=ffff8b5e06c0 items=0 ppid=2651 pid=2739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.828000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:15:52.842000 audit[2742]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=2742 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:52.842000 audit[2742]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff624f1c0 a2=0 a3=ffff9aa496c0 items=0 ppid=2651 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.842000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:15:52.846000 audit[2743]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=2743 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:52.846000 audit[2743]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe2454040 a2=0 a3=ffffa91e56c0 items=0 ppid=2651 pid=2743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.846000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:15:52.855000 audit[2745]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=2745 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:52.855000 audit[2745]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=528 a0=3 a1=ffffcf101460 a2=0 a3=ffff805f16c0 items=0 ppid=2651 pid=2745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.855000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:15:52.859000 audit[2746]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2746 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:52.859000 audit[2746]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe0abdfa0 a2=0 a3=ffff88a666c0 items=0 ppid=2651 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.859000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:15:52.869000 audit[2748]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=2748 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:52.869000 audit[2748]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd2c86840 a2=0 a3=ffff8efdc6c0 items=0 ppid=2651 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.869000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:15:52.883000 audit[2751]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2751 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:52.883000 audit[2751]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffc3a05b40 a2=0 a3=ffffb498c6c0 items=0 ppid=2651 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.883000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:15:52.887000 audit[2752]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2752 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:52.887000 audit[2752]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd747b160 a2=0 a3=ffffa1c546c0 items=0 ppid=2651 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.887000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:15:52.896000 audit[2754]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2754 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:52.896000 audit[2754]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffdb93bc0 a2=0 a3=ffff8d0556c0 items=0 ppid=2651 pid=2754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.896000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:15:52.900000 audit[2755]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2755 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:52.900000 audit[2755]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffecc346a0 a2=0 a3=ffffa1a3b6c0 items=0 ppid=2651 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.900000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:15:52.909000 audit[2757]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2757 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:52.909000 audit[2757]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffea6c8f60 a2=0 a3=ffffb49656c0 items=0 ppid=2651 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.909000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:15:52.921000 audit[2760]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=2760 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:52.921000 audit[2760]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffef1d3fa0 a2=0 a3=ffff95dfb6c0 items=0 ppid=2651 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.921000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:15:52.936000 audit[2763]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=2763 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:52.936000 audit[2763]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcc02f330 a2=0 a3=ffffbe3196c0 items=0 
ppid=2651 pid=2763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.936000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:15:52.940000 audit[2764]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=2764 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:52.940000 audit[2764]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffccd621f0 a2=0 a3=ffffbc8426c0 items=0 ppid=2651 pid=2764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.940000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:15:52.948000 audit[2766]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=2766 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:52.948000 audit[2766]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=fffff2520330 a2=0 a3=ffff87cfe6c0 items=0 ppid=2651 pid=2766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.948000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:15:52.960000 audit[2769]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=2769 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:52.960000 audit[2769]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffe63d83c0 a2=0 a3=ffffb94ed6c0 items=0 ppid=2651 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.960000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:15:52.978000 audit[2773]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=2773 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:15:52.978000 audit[2773]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffe1a5d680 a2=0 a3=ffff9bbd56c0 items=0 ppid=2651 pid=2773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.978000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:15:52.979000 audit[2773]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain 
pid=2773 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:15:52.979000 audit[2773]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1968 a0=3 a1=ffffe1a5d680 a2=0 a3=ffff9bbd56c0 items=0 ppid=2651 pid=2773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:52.979000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:15:53.092101 kubelet[2203]: W1002 19:15:53.091988 2203 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec8678b8_87b9_47df_9317_82b9208c54aa.slice/cri-containerd-31128ecb959bb670d7f2cf39d8ae7e41b06f4b657dfbb4a0243983d261db510b.scope WatchSource:0}: container "31128ecb959bb670d7f2cf39d8ae7e41b06f4b657dfbb4a0243983d261db510b" in namespace "k8s.io": not found Oct 2 19:15:53.109351 kubelet[2203]: E1002 19:15:53.109312 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:54.110779 kubelet[2203]: E1002 19:15:54.110724 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:55.112276 kubelet[2203]: E1002 19:15:55.112219 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:56.113236 kubelet[2203]: E1002 19:15:56.113165 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:56.200032 kubelet[2203]: W1002 19:15:56.199985 2203 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec8678b8_87b9_47df_9317_82b9208c54aa.slice/cri-containerd-154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f.scope WatchSource:0}: task 154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f not found: not found Oct 2 19:15:57.113505 kubelet[2203]: E1002 19:15:57.113435 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:58.114506 kubelet[2203]: E1002 19:15:58.114452 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:59.115541 kubelet[2203]: E1002 19:15:59.115451 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:00.081721 kubelet[2203]: E1002 19:16:00.081679 2203 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:00.115992 kubelet[2203]: E1002 19:16:00.115933 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:01.116598 kubelet[2203]: E1002 19:16:01.116564 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:02.118171 kubelet[2203]: E1002 19:16:02.118128 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:03.119720 kubelet[2203]: E1002 19:16:03.119645 2203 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:04.120096 kubelet[2203]: E1002 19:16:04.120051 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:05.121437 kubelet[2203]: E1002 19:16:05.121402 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:06.122165 kubelet[2203]: E1002 19:16:06.122130 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:06.549212 env[1743]: time="2023-10-02T19:16:06.548100153Z" level=info msg="CreateContainer within sandbox \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:16:06.572516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1825699607.mount: Deactivated successfully. Oct 2 19:16:06.581474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2252080489.mount: Deactivated successfully. Oct 2 19:16:06.589177 env[1743]: time="2023-10-02T19:16:06.589088374Z" level=info msg="CreateContainer within sandbox \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f\"" Oct 2 19:16:06.590595 env[1743]: time="2023-10-02T19:16:06.590536778Z" level=info msg="StartContainer for \"fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f\"" Oct 2 19:16:06.642144 systemd[1]: Started cri-containerd-fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f.scope. Oct 2 19:16:06.679067 systemd[1]: cri-containerd-fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f.scope: Deactivated successfully. 
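
The PROCTITLE values in the audit records earlier in this section are the auditing process's full command line, hex-encoded with NUL-separated arguments (standard Linux audit formatting). A minimal Python sketch for decoding one of them, using the value copied verbatim from the first NETFILTER_CFG record above:

    # Decode a Linux audit PROCTITLE field: hex-encoded argv, NUL-separated.
    def decode_proctitle(hex_value: str) -> str:
        argv = bytes.fromhex(hex_value).split(b"\x00")
        return " ".join(arg.decode("utf-8", errors="replace") for arg in argv)

    print(decode_proctitle(
        "6970367461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D464F5257415244002D740066696C746572"
    ))
    # -> ip6tables -w 5 -W 100000 -N KUBE-FORWARD -t filter

The remaining PROCTITLE records decode the same way to the other ip6tables and ip6tables-restore invocations recorded here.
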
Oct 2 19:16:06.839330 env[1743]: time="2023-10-02T19:16:06.838780642Z" level=info msg="shim disconnected" id=fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f Oct 2 19:16:06.839330 env[1743]: time="2023-10-02T19:16:06.838855007Z" level=warning msg="cleaning up after shim disconnected" id=fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f namespace=k8s.io Oct 2 19:16:06.839330 env[1743]: time="2023-10-02T19:16:06.838899336Z" level=info msg="cleaning up dead shim" Oct 2 19:16:06.865614 env[1743]: time="2023-10-02T19:16:06.865518400Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:16:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2800 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:16:06Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:16:06.866186 env[1743]: time="2023-10-02T19:16:06.866068271Z" level=error msg="copy shim log" error="read /proc/self/fd/55: file already closed" Oct 2 19:16:06.867060 env[1743]: time="2023-10-02T19:16:06.866975397Z" level=error msg="Failed to pipe stderr of container \"fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f\"" error="reading from a closed fifo" Oct 2 19:16:06.867307 env[1743]: time="2023-10-02T19:16:06.867258552Z" level=error msg="Failed to pipe stdout of container \"fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f\"" error="reading from a closed fifo" Oct 2 19:16:06.871534 env[1743]: time="2023-10-02T19:16:06.871461552Z" level=error msg="StartContainer for \"fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:16:06.872716 kubelet[2203]: E1002 19:16:06.871998 2203 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f" Oct 2 19:16:06.872716 kubelet[2203]: E1002 19:16:06.872156 2203 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:16:06.872716 kubelet[2203]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:16:06.872716 kubelet[2203]: rm /hostbin/cilium-mount Oct 2 19:16:06.873139 kubelet[2203]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mk8rm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:16:06.873253 kubelet[2203]: E1002 19:16:06.872220 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zlqcm" podUID=ec8678b8-87b9-47df-9317-82b9208c54aa Oct 2 19:16:07.123371 kubelet[2203]: E1002 19:16:07.123304 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:07.567624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f-rootfs.mount: Deactivated successfully. 
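
The StartContainer failure above wraps the root cause in several layers of context added as the error passes back up through runc, containerd and kubelet. Splitting the message on ": " makes the chain readable; the string below is copied from the entry above, and the innermost layers name the actual failure (the write to /proc/self/attr/keycreate rejected with EINVAL):

    # Unwrap the layered containerd/runc error from the StartContainer failure.
    err = ("failed to create containerd task: failed to create shim task: "
           "OCI runtime create failed: runc create failed: unable to start "
           "container process: error during container init: "
           "write /proc/self/attr/keycreate: invalid argument: unknown")

    for depth, layer in enumerate(err.split(": ")):
        print(f"{depth}: {layer}")
    # The deepest layers, "write /proc/self/attr/keycreate" / "invalid argument",
    # are the root cause; the outer ones only record who propagated it.
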
Oct 2 19:16:07.667093 kubelet[2203]: I1002 19:16:07.667060 2203 scope.go:115] "RemoveContainer" containerID="154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f" Oct 2 19:16:07.667668 kubelet[2203]: I1002 19:16:07.667626 2203 scope.go:115] "RemoveContainer" containerID="154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f" Oct 2 19:16:07.670302 env[1743]: time="2023-10-02T19:16:07.670226923Z" level=info msg="RemoveContainer for \"154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f\"" Oct 2 19:16:07.672383 env[1743]: time="2023-10-02T19:16:07.672247888Z" level=info msg="RemoveContainer for \"154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f\"" Oct 2 19:16:07.673155 env[1743]: time="2023-10-02T19:16:07.673032289Z" level=error msg="RemoveContainer for \"154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f\" failed" error="failed to set removing state for container \"154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f\": container is already in removing state" Oct 2 19:16:07.673453 kubelet[2203]: E1002 19:16:07.673424 2203 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f\": container is already in removing state" containerID="154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f" Oct 2 19:16:07.673627 kubelet[2203]: E1002 19:16:07.673605 2203 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f": container is already in removing state; Skipping pod "cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa)" Oct 2 19:16:07.674776 kubelet[2203]: E1002 19:16:07.674172 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa)\"" pod="kube-system/cilium-zlqcm" podUID=ec8678b8-87b9-47df-9317-82b9208c54aa Oct 2 19:16:07.675929 env[1743]: time="2023-10-02T19:16:07.675821143Z" level=info msg="RemoveContainer for \"154b8410b1f95676c77ada69216352ba93cfd500876d06d88b91deb8cf7b743f\" returns successfully" Oct 2 19:16:08.124010 kubelet[2203]: E1002 19:16:08.123964 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:09.125056 kubelet[2203]: E1002 19:16:09.124987 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:09.945471 kubelet[2203]: W1002 19:16:09.945384 2203 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec8678b8_87b9_47df_9317_82b9208c54aa.slice/cri-containerd-fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f.scope WatchSource:0}: task fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f not found: not found Oct 2 19:16:10.125551 kubelet[2203]: E1002 19:16:10.125496 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:11.126559 kubelet[2203]: E1002 19:16:11.126513 2203 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:12.127497 kubelet[2203]: E1002 19:16:12.127450 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:13.128563 kubelet[2203]: E1002 19:16:13.128521 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:14.129823 kubelet[2203]: E1002 19:16:14.129780 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:15.130807 kubelet[2203]: E1002 19:16:15.130756 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:16.132332 kubelet[2203]: E1002 19:16:16.132268 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:17.133291 kubelet[2203]: E1002 19:16:17.133222 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:18.133586 kubelet[2203]: E1002 19:16:18.133520 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:19.134725 kubelet[2203]: E1002 19:16:19.134637 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:20.081973 kubelet[2203]: E1002 19:16:20.081930 2203 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:20.136324 kubelet[2203]: E1002 19:16:20.136270 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:20.543633 kubelet[2203]: E1002 19:16:20.543590 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa)\"" pod="kube-system/cilium-zlqcm" podUID=ec8678b8-87b9-47df-9317-82b9208c54aa Oct 2 19:16:21.136515 kubelet[2203]: E1002 19:16:21.136427 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:22.137071 kubelet[2203]: E1002 19:16:22.137030 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:23.138249 kubelet[2203]: E1002 19:16:23.138188 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:24.138797 kubelet[2203]: E1002 19:16:24.138741 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:25.139075 kubelet[2203]: E1002 19:16:25.139036 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:26.140802 kubelet[2203]: E1002 19:16:26.140760 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:27.142006 kubelet[2203]: E1002 19:16:27.141938 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:28.142673 kubelet[2203]: E1002 
19:16:28.142612 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:29.143008 kubelet[2203]: E1002 19:16:29.142815 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:30.143993 kubelet[2203]: E1002 19:16:30.143719 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:31.145367 kubelet[2203]: E1002 19:16:31.145303 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:32.146484 kubelet[2203]: E1002 19:16:32.146411 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:33.147299 kubelet[2203]: E1002 19:16:33.147262 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:34.148836 kubelet[2203]: E1002 19:16:34.148791 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:34.549489 env[1743]: time="2023-10-02T19:16:34.549090341Z" level=info msg="CreateContainer within sandbox \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:16:34.566501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2054930095.mount: Deactivated successfully. Oct 2 19:16:34.577867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4238692685.mount: Deactivated successfully. Oct 2 19:16:34.584807 env[1743]: time="2023-10-02T19:16:34.584743609Z" level=info msg="CreateContainer within sandbox \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af\"" Oct 2 19:16:34.586281 env[1743]: time="2023-10-02T19:16:34.586208679Z" level=info msg="StartContainer for \"f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af\"" Oct 2 19:16:34.635040 systemd[1]: Started cri-containerd-f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af.scope. Oct 2 19:16:34.677024 systemd[1]: cri-containerd-f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af.scope: Deactivated successfully. 
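
From here on the same cycle repeats for each new attempt: CreateContainer with an incremented Attempt counter, the cri-containerd scope starting and immediately deactivating, the shim disconnecting, and the keycreate failure. A small sketch, assuming this journal has been exported to a plain-text file (hypothetical name node.log), that lists the attempts and the keycreate failures so the loop is visible at a glance; the patterns match phrases that appear verbatim in the entries above:

    import re

    # Sketch only: "node.log" is a hypothetical plain-text export of this journal.
    attempt_re = re.compile(r'CreateContainer within sandbox .*?Attempt:(\d+)')
    failure_re = re.compile(r'StartContainer for \\?"([0-9a-f]{64})\\?" failed.*?keycreate')

    with open("node.log", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for m in attempt_re.finditer(line):
                print("create attempt", m.group(1))
            for m in failure_re.finditer(line):
                print("start failed (keycreate), container", m.group(1)[:12])
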
Oct 2 19:16:34.702589 env[1743]: time="2023-10-02T19:16:34.702517974Z" level=info msg="shim disconnected" id=f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af Oct 2 19:16:34.702898 env[1743]: time="2023-10-02T19:16:34.702590334Z" level=warning msg="cleaning up after shim disconnected" id=f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af namespace=k8s.io Oct 2 19:16:34.702898 env[1743]: time="2023-10-02T19:16:34.702615090Z" level=info msg="cleaning up dead shim" Oct 2 19:16:34.732365 env[1743]: time="2023-10-02T19:16:34.732286370Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:16:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2843 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:16:34Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:16:34.732843 env[1743]: time="2023-10-02T19:16:34.732755727Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:16:34.734034 env[1743]: time="2023-10-02T19:16:34.733962269Z" level=error msg="Failed to pipe stdout of container \"f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af\"" error="reading from a closed fifo" Oct 2 19:16:34.734287 env[1743]: time="2023-10-02T19:16:34.734229354Z" level=error msg="Failed to pipe stderr of container \"f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af\"" error="reading from a closed fifo" Oct 2 19:16:34.736976 env[1743]: time="2023-10-02T19:16:34.736900655Z" level=error msg="StartContainer for \"f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:16:34.738091 kubelet[2203]: E1002 19:16:34.737403 2203 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af" Oct 2 19:16:34.738091 kubelet[2203]: E1002 19:16:34.737535 2203 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:16:34.738091 kubelet[2203]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:16:34.738091 kubelet[2203]: rm /hostbin/cilium-mount Oct 2 19:16:34.738486 kubelet[2203]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mk8rm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:16:34.738603 kubelet[2203]: E1002 19:16:34.737594 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zlqcm" podUID=ec8678b8-87b9-47df-9317-82b9208c54aa Oct 2 19:16:35.150540 kubelet[2203]: E1002 19:16:35.150452 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:35.561297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af-rootfs.mount: Deactivated successfully. 
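
The restart delay kubelet announces for this init container doubles with each failed attempt: 20s earlier in this section, then 40s and 1m20s in the entries that follow. A minimal sketch of that schedule, assuming the default kubelet crash-loop back-off (10s initial delay, doubled per failure, capped at 5 minutes):

    # CrashLoopBackOff delays, assuming kubelet defaults (10s base, 5 min cap).
    def backoff_schedule(attempts: int, base: float = 10.0, cap: float = 300.0):
        delay = base
        for _ in range(attempts):
            yield min(delay, cap)
            delay *= 2

    print([f"{int(d)}s" for d in backoff_schedule(8)])
    # ['10s', '20s', '40s', '80s', '160s', '300s', '300s', '300s']  (80s = 1m20s)
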
Oct 2 19:16:35.735355 kubelet[2203]: I1002 19:16:35.735276 2203 scope.go:115] "RemoveContainer" containerID="fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f" Oct 2 19:16:35.737909 kubelet[2203]: I1002 19:16:35.736755 2203 scope.go:115] "RemoveContainer" containerID="fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f" Oct 2 19:16:35.758522 env[1743]: time="2023-10-02T19:16:35.758460365Z" level=info msg="RemoveContainer for \"fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f\"" Oct 2 19:16:35.759984 env[1743]: time="2023-10-02T19:16:35.759905539Z" level=info msg="RemoveContainer for \"fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f\"" Oct 2 19:16:35.760152 env[1743]: time="2023-10-02T19:16:35.760048700Z" level=error msg="RemoveContainer for \"fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f\" failed" error="failed to set removing state for container \"fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f\": container is already in removing state" Oct 2 19:16:35.760614 kubelet[2203]: E1002 19:16:35.760585 2203 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f\": container is already in removing state" containerID="fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f" Oct 2 19:16:35.760920 kubelet[2203]: E1002 19:16:35.760860 2203 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f": container is already in removing state; Skipping pod "cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa)" Oct 2 19:16:35.762491 kubelet[2203]: E1002 19:16:35.762456 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa)\"" pod="kube-system/cilium-zlqcm" podUID=ec8678b8-87b9-47df-9317-82b9208c54aa Oct 2 19:16:35.763010 env[1743]: time="2023-10-02T19:16:35.762945337Z" level=info msg="RemoveContainer for \"fa48a56f5800539f437aec8dd7a9785da8a4efc2742b1109011e57fb4ecbf50f\" returns successfully" Oct 2 19:16:36.151574 kubelet[2203]: E1002 19:16:36.151304 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:37.151557 kubelet[2203]: E1002 19:16:37.151484 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:37.809377 kubelet[2203]: W1002 19:16:37.809317 2203 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec8678b8_87b9_47df_9317_82b9208c54aa.slice/cri-containerd-f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af.scope WatchSource:0}: task f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af not found: not found Oct 2 19:16:38.152295 kubelet[2203]: E1002 19:16:38.152231 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:39.152715 kubelet[2203]: E1002 19:16:39.152648 2203 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:40.082211 kubelet[2203]: E1002 19:16:40.082151 2203 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:40.153501 kubelet[2203]: E1002 19:16:40.153456 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:41.154622 kubelet[2203]: E1002 19:16:41.154553 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:42.155076 kubelet[2203]: E1002 19:16:42.155013 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:43.155616 kubelet[2203]: E1002 19:16:43.155572 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:44.156637 kubelet[2203]: E1002 19:16:44.156577 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:45.157675 kubelet[2203]: E1002 19:16:45.157620 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:46.158324 kubelet[2203]: E1002 19:16:46.158262 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:47.158520 kubelet[2203]: E1002 19:16:47.158457 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:48.159153 kubelet[2203]: E1002 19:16:48.159117 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:48.544970 kubelet[2203]: E1002 19:16:48.544812 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa)\"" pod="kube-system/cilium-zlqcm" podUID=ec8678b8-87b9-47df-9317-82b9208c54aa Oct 2 19:16:49.160627 kubelet[2203]: E1002 19:16:49.160558 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:50.160998 kubelet[2203]: E1002 19:16:50.160941 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:51.161504 kubelet[2203]: E1002 19:16:51.161464 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:52.162433 kubelet[2203]: E1002 19:16:52.162381 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:53.163286 kubelet[2203]: E1002 19:16:53.163247 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:54.164357 kubelet[2203]: E1002 19:16:54.164299 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:55.165323 kubelet[2203]: E1002 19:16:55.165288 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:56.166389 kubelet[2203]: E1002 
19:16:56.166352 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:57.168136 kubelet[2203]: E1002 19:16:57.168073 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:58.168859 kubelet[2203]: E1002 19:16:58.168781 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:59.169769 kubelet[2203]: E1002 19:16:59.169723 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:00.082266 kubelet[2203]: E1002 19:17:00.082233 2203 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:00.170865 kubelet[2203]: E1002 19:17:00.170832 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:01.171728 kubelet[2203]: E1002 19:17:01.171673 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:01.544930 kubelet[2203]: E1002 19:17:01.544782 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa)\"" pod="kube-system/cilium-zlqcm" podUID=ec8678b8-87b9-47df-9317-82b9208c54aa Oct 2 19:17:02.172007 kubelet[2203]: E1002 19:17:02.171969 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:03.173008 kubelet[2203]: E1002 19:17:03.172945 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:04.174111 kubelet[2203]: E1002 19:17:04.174045 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:05.174781 kubelet[2203]: E1002 19:17:05.174745 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:06.175839 kubelet[2203]: E1002 19:17:06.175783 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:07.176140 kubelet[2203]: E1002 19:17:07.176081 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:08.177272 kubelet[2203]: E1002 19:17:08.177233 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:09.178144 kubelet[2203]: E1002 19:17:09.178107 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:10.179026 kubelet[2203]: E1002 19:17:10.178969 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:11.180125 kubelet[2203]: E1002 19:17:11.180073 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:12.181244 kubelet[2203]: E1002 19:17:12.181180 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:17:13.181889 kubelet[2203]: E1002 19:17:13.181826 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:14.182584 kubelet[2203]: E1002 19:17:14.182548 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:15.184360 kubelet[2203]: E1002 19:17:15.184298 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:15.547906 env[1743]: time="2023-10-02T19:17:15.547714104Z" level=info msg="CreateContainer within sandbox \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:17:15.570939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1337618929.mount: Deactivated successfully. Oct 2 19:17:15.581479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount753197421.mount: Deactivated successfully. Oct 2 19:17:15.588926 env[1743]: time="2023-10-02T19:17:15.588808846Z" level=info msg="CreateContainer within sandbox \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148\"" Oct 2 19:17:15.590323 env[1743]: time="2023-10-02T19:17:15.590233039Z" level=info msg="StartContainer for \"b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148\"" Oct 2 19:17:15.640445 systemd[1]: Started cri-containerd-b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148.scope. Oct 2 19:17:15.677854 systemd[1]: cri-containerd-b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148.scope: Deactivated successfully. 
Oct 2 19:17:15.704274 env[1743]: time="2023-10-02T19:17:15.704180074Z" level=info msg="shim disconnected" id=b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148 Oct 2 19:17:15.704274 env[1743]: time="2023-10-02T19:17:15.704259862Z" level=warning msg="cleaning up after shim disconnected" id=b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148 namespace=k8s.io Oct 2 19:17:15.704664 env[1743]: time="2023-10-02T19:17:15.704283874Z" level=info msg="cleaning up dead shim" Oct 2 19:17:15.741271 env[1743]: time="2023-10-02T19:17:15.741193984Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:17:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2884 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:17:15Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:17:15.741782 env[1743]: time="2023-10-02T19:17:15.741678223Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:17:15.743047 env[1743]: time="2023-10-02T19:17:15.742979272Z" level=error msg="Failed to pipe stderr of container \"b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148\"" error="reading from a closed fifo" Oct 2 19:17:15.746043 env[1743]: time="2023-10-02T19:17:15.745970688Z" level=error msg="Failed to pipe stdout of container \"b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148\"" error="reading from a closed fifo" Oct 2 19:17:15.748818 env[1743]: time="2023-10-02T19:17:15.748731306Z" level=error msg="StartContainer for \"b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:17:15.750387 kubelet[2203]: E1002 19:17:15.749372 2203 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148" Oct 2 19:17:15.750387 kubelet[2203]: E1002 19:17:15.749563 2203 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:17:15.750387 kubelet[2203]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:17:15.750387 kubelet[2203]: rm /hostbin/cilium-mount Oct 2 19:17:15.750814 kubelet[2203]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mk8rm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:17:15.750975 kubelet[2203]: E1002 19:17:15.749674 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zlqcm" podUID=ec8678b8-87b9-47df-9317-82b9208c54aa Oct 2 19:17:15.815797 kubelet[2203]: I1002 19:17:15.814647 2203 scope.go:115] "RemoveContainer" containerID="f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af" Oct 2 19:17:15.815797 kubelet[2203]: I1002 19:17:15.815648 2203 scope.go:115] "RemoveContainer" containerID="f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af" Oct 2 19:17:15.818212 env[1743]: time="2023-10-02T19:17:15.818141208Z" level=info msg="RemoveContainer for \"f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af\"" Oct 2 19:17:15.819225 env[1743]: time="2023-10-02T19:17:15.819156883Z" level=info msg="RemoveContainer for \"f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af\"" Oct 2 19:17:15.820941 env[1743]: time="2023-10-02T19:17:15.820829790Z" level=error msg="RemoveContainer for \"f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af\" failed" error="failed to set removing state for container \"f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af\": container is already in removing state" Oct 2 19:17:15.821181 kubelet[2203]: E1002 19:17:15.821142 2203 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af\": container is already in removing state" 
containerID="f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af" Oct 2 19:17:15.821294 kubelet[2203]: E1002 19:17:15.821202 2203 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af": container is already in removing state; Skipping pod "cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa)" Oct 2 19:17:15.821643 kubelet[2203]: E1002 19:17:15.821603 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa)\"" pod="kube-system/cilium-zlqcm" podUID=ec8678b8-87b9-47df-9317-82b9208c54aa Oct 2 19:17:15.824642 env[1743]: time="2023-10-02T19:17:15.824583091Z" level=info msg="RemoveContainer for \"f6e6a6fbc990271c03876215a2f38f026bad4c02ef8245311e65b07c6a3ad4af\" returns successfully" Oct 2 19:17:16.184452 kubelet[2203]: E1002 19:17:16.184416 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:16.561594 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148-rootfs.mount: Deactivated successfully. Oct 2 19:17:17.185695 kubelet[2203]: E1002 19:17:17.185630 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:18.186232 kubelet[2203]: E1002 19:17:18.186196 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:18.810183 kubelet[2203]: W1002 19:17:18.810136 2203 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec8678b8_87b9_47df_9317_82b9208c54aa.slice/cri-containerd-b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148.scope WatchSource:0}: task b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148 not found: not found Oct 2 19:17:19.187092 kubelet[2203]: E1002 19:17:19.187032 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:20.081579 kubelet[2203]: E1002 19:17:20.081518 2203 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:20.176004 kubelet[2203]: E1002 19:17:20.175952 2203 kubelet_node_status.go:452] "Node not becoming ready in time after startup" Oct 2 19:17:20.187211 kubelet[2203]: E1002 19:17:20.187158 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:20.302154 kubelet[2203]: E1002 19:17:20.302095 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:21.187767 kubelet[2203]: E1002 19:17:21.187705 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:22.188370 kubelet[2203]: E1002 19:17:22.188325 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:23.189983 
kubelet[2203]: E1002 19:17:23.189940 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:24.191131 kubelet[2203]: E1002 19:17:24.191065 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:25.191241 kubelet[2203]: E1002 19:17:25.191168 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:25.304546 kubelet[2203]: E1002 19:17:25.304493 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:26.191422 kubelet[2203]: E1002 19:17:26.191322 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:27.192185 kubelet[2203]: E1002 19:17:27.192145 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:28.193566 kubelet[2203]: E1002 19:17:28.193496 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:29.194619 kubelet[2203]: E1002 19:17:29.194574 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:30.196116 kubelet[2203]: E1002 19:17:30.196078 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:30.306255 kubelet[2203]: E1002 19:17:30.306179 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:31.197541 kubelet[2203]: E1002 19:17:31.197489 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:31.544847 kubelet[2203]: E1002 19:17:31.544320 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa)\"" pod="kube-system/cilium-zlqcm" podUID=ec8678b8-87b9-47df-9317-82b9208c54aa Oct 2 19:17:32.198799 kubelet[2203]: E1002 19:17:32.198757 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:33.199861 kubelet[2203]: E1002 19:17:33.199819 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:34.200860 kubelet[2203]: E1002 19:17:34.200807 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:35.201763 kubelet[2203]: E1002 19:17:35.201713 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:35.308140 kubelet[2203]: E1002 19:17:35.308101 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:36.201959 kubelet[2203]: E1002 19:17:36.201892 2203 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:37.202984 kubelet[2203]: E1002 19:17:37.202943 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:38.204241 kubelet[2203]: E1002 19:17:38.204178 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:39.204521 kubelet[2203]: E1002 19:17:39.204458 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:40.081455 kubelet[2203]: E1002 19:17:40.081415 2203 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:40.205005 kubelet[2203]: E1002 19:17:40.204946 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:40.309067 kubelet[2203]: E1002 19:17:40.309014 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:41.205390 kubelet[2203]: E1002 19:17:41.205345 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:42.206684 kubelet[2203]: E1002 19:17:42.206620 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:43.207632 kubelet[2203]: E1002 19:17:43.207560 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:44.207734 kubelet[2203]: E1002 19:17:44.207686 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:45.209438 kubelet[2203]: E1002 19:17:45.209393 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:45.310706 kubelet[2203]: E1002 19:17:45.310675 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:45.544357 kubelet[2203]: E1002 19:17:45.544028 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa)\"" pod="kube-system/cilium-zlqcm" podUID=ec8678b8-87b9-47df-9317-82b9208c54aa Oct 2 19:17:46.210327 kubelet[2203]: E1002 19:17:46.210241 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:47.211114 kubelet[2203]: E1002 19:17:47.211045 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:48.211619 kubelet[2203]: E1002 19:17:48.211553 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:49.212123 kubelet[2203]: E1002 19:17:49.212060 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:50.212945 kubelet[2203]: E1002 19:17:50.212904 2203 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:50.312024 kubelet[2203]: E1002 19:17:50.311991 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:51.214529 kubelet[2203]: E1002 19:17:51.214482 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:52.215537 kubelet[2203]: E1002 19:17:52.215472 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:53.216183 kubelet[2203]: E1002 19:17:53.216115 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:54.216603 kubelet[2203]: E1002 19:17:54.216561 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:55.217556 kubelet[2203]: E1002 19:17:55.217490 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:55.313984 kubelet[2203]: E1002 19:17:55.313860 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:56.218669 kubelet[2203]: E1002 19:17:56.218607 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:57.219751 kubelet[2203]: E1002 19:17:57.219684 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:58.220420 kubelet[2203]: E1002 19:17:58.220360 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:59.220851 kubelet[2203]: E1002 19:17:59.220781 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:59.544784 kubelet[2203]: E1002 19:17:59.544366 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa)\"" pod="kube-system/cilium-zlqcm" podUID=ec8678b8-87b9-47df-9317-82b9208c54aa Oct 2 19:18:00.082072 kubelet[2203]: E1002 19:18:00.081972 2203 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:00.221918 kubelet[2203]: E1002 19:18:00.221864 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:00.314910 kubelet[2203]: E1002 19:18:00.314859 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:01.223647 kubelet[2203]: E1002 19:18:01.223575 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:02.224448 kubelet[2203]: E1002 19:18:02.224378 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:18:03.225223 kubelet[2203]: E1002 19:18:03.225149 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:04.225489 kubelet[2203]: E1002 19:18:04.225427 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:05.225739 kubelet[2203]: E1002 19:18:05.225665 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:05.316685 kubelet[2203]: E1002 19:18:05.316633 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:06.226415 kubelet[2203]: E1002 19:18:06.226370 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:07.227170 kubelet[2203]: E1002 19:18:07.227099 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:08.227545 kubelet[2203]: E1002 19:18:08.227482 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:09.227749 kubelet[2203]: E1002 19:18:09.227673 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:10.228716 kubelet[2203]: E1002 19:18:10.228675 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:10.318434 kubelet[2203]: E1002 19:18:10.318381 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:11.230279 kubelet[2203]: E1002 19:18:11.230236 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:12.231009 kubelet[2203]: E1002 19:18:12.230967 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:12.543921 kubelet[2203]: E1002 19:18:12.543517 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa)\"" pod="kube-system/cilium-zlqcm" podUID=ec8678b8-87b9-47df-9317-82b9208c54aa Oct 2 19:18:13.232028 kubelet[2203]: E1002 19:18:13.231964 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:14.232628 kubelet[2203]: E1002 19:18:14.232555 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:15.233409 kubelet[2203]: E1002 19:18:15.233367 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:15.319739 kubelet[2203]: E1002 19:18:15.319708 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:16.234943 kubelet[2203]: E1002 
19:18:16.234904 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:17.235958 kubelet[2203]: E1002 19:18:17.235906 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:18.236754 kubelet[2203]: E1002 19:18:18.236686 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:19.236920 kubelet[2203]: E1002 19:18:19.236847 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:20.081590 kubelet[2203]: E1002 19:18:20.081531 2203 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:20.238452 kubelet[2203]: E1002 19:18:20.238377 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:20.321426 kubelet[2203]: E1002 19:18:20.321394 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:21.239496 kubelet[2203]: E1002 19:18:21.239422 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:22.240122 kubelet[2203]: E1002 19:18:22.240046 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:23.240638 kubelet[2203]: E1002 19:18:23.240590 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:24.241969 kubelet[2203]: E1002 19:18:24.241922 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:25.242907 kubelet[2203]: E1002 19:18:25.242839 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:25.323394 kubelet[2203]: E1002 19:18:25.323347 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:26.243590 kubelet[2203]: E1002 19:18:26.243551 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:26.544360 kubelet[2203]: E1002 19:18:26.544225 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa)\"" pod="kube-system/cilium-zlqcm" podUID=ec8678b8-87b9-47df-9317-82b9208c54aa Oct 2 19:18:27.245234 kubelet[2203]: E1002 19:18:27.245165 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:28.246329 kubelet[2203]: E1002 19:18:28.246286 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:29.247862 kubelet[2203]: E1002 19:18:29.247789 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:18:30.248614 kubelet[2203]: E1002 19:18:30.248553 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:30.324287 kubelet[2203]: E1002 19:18:30.324255 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:31.249586 kubelet[2203]: E1002 19:18:31.249508 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:32.250576 kubelet[2203]: E1002 19:18:32.250524 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:33.252208 kubelet[2203]: E1002 19:18:33.252164 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:34.253331 kubelet[2203]: E1002 19:18:34.253287 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:35.254140 kubelet[2203]: E1002 19:18:35.254096 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:35.325219 kubelet[2203]: E1002 19:18:35.325189 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:36.255581 kubelet[2203]: E1002 19:18:36.255538 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:37.257139 kubelet[2203]: E1002 19:18:37.257053 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:38.258234 kubelet[2203]: E1002 19:18:38.258167 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:39.259259 kubelet[2203]: E1002 19:18:39.259188 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:40.082050 kubelet[2203]: E1002 19:18:40.082012 2203 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:40.259344 kubelet[2203]: E1002 19:18:40.259308 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:40.326105 kubelet[2203]: E1002 19:18:40.326048 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:41.260096 kubelet[2203]: E1002 19:18:41.260051 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:41.547751 env[1743]: time="2023-10-02T19:18:41.547589504Z" level=info msg="CreateContainer within sandbox \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}" Oct 2 19:18:41.567745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4258091620.mount: Deactivated successfully. 
Oct 2 19:18:41.576054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4032791637.mount: Deactivated successfully. Oct 2 19:18:41.583519 env[1743]: time="2023-10-02T19:18:41.583434157Z" level=info msg="CreateContainer within sandbox \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id \"43e5731f554164ee1c757e1d0909f4e6291250e742b377b85e687a2093be87fd\"" Oct 2 19:18:41.584268 env[1743]: time="2023-10-02T19:18:41.584214459Z" level=info msg="StartContainer for \"43e5731f554164ee1c757e1d0909f4e6291250e742b377b85e687a2093be87fd\"" Oct 2 19:18:41.633690 systemd[1]: Started cri-containerd-43e5731f554164ee1c757e1d0909f4e6291250e742b377b85e687a2093be87fd.scope. Oct 2 19:18:41.670333 systemd[1]: cri-containerd-43e5731f554164ee1c757e1d0909f4e6291250e742b377b85e687a2093be87fd.scope: Deactivated successfully. Oct 2 19:18:41.694419 env[1743]: time="2023-10-02T19:18:41.694330523Z" level=info msg="shim disconnected" id=43e5731f554164ee1c757e1d0909f4e6291250e742b377b85e687a2093be87fd Oct 2 19:18:41.694419 env[1743]: time="2023-10-02T19:18:41.694408416Z" level=warning msg="cleaning up after shim disconnected" id=43e5731f554164ee1c757e1d0909f4e6291250e742b377b85e687a2093be87fd namespace=k8s.io Oct 2 19:18:41.694775 env[1743]: time="2023-10-02T19:18:41.694430520Z" level=info msg="cleaning up dead shim" Oct 2 19:18:41.722016 env[1743]: time="2023-10-02T19:18:41.721894151Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2931 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:18:41Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/43e5731f554164ee1c757e1d0909f4e6291250e742b377b85e687a2093be87fd/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:18:41.722492 env[1743]: time="2023-10-02T19:18:41.722395200Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:18:41.722754 env[1743]: time="2023-10-02T19:18:41.722685312Z" level=error msg="Failed to pipe stdout of container \"43e5731f554164ee1c757e1d0909f4e6291250e742b377b85e687a2093be87fd\"" error="reading from a closed fifo" Oct 2 19:18:41.723014 env[1743]: time="2023-10-02T19:18:41.722924377Z" level=error msg="Failed to pipe stderr of container \"43e5731f554164ee1c757e1d0909f4e6291250e742b377b85e687a2093be87fd\"" error="reading from a closed fifo" Oct 2 19:18:41.725734 env[1743]: time="2023-10-02T19:18:41.725651095Z" level=error msg="StartContainer for \"43e5731f554164ee1c757e1d0909f4e6291250e742b377b85e687a2093be87fd\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:18:41.726162 kubelet[2203]: E1002 19:18:41.726109 2203 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="43e5731f554164ee1c757e1d0909f4e6291250e742b377b85e687a2093be87fd" Oct 2 19:18:41.726688 kubelet[2203]: E1002 19:18:41.726639 2203 kuberuntime_manager.go:872] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:18:41.726688 kubelet[2203]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:18:41.726688 kubelet[2203]: rm /hostbin/cilium-mount Oct 2 19:18:41.726688 kubelet[2203]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mk8rm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:18:41.727071 kubelet[2203]: E1002 19:18:41.726738 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zlqcm" podUID=ec8678b8-87b9-47df-9317-82b9208c54aa Oct 2 19:18:41.988148 kubelet[2203]: I1002 19:18:41.987669 2203 scope.go:115] "RemoveContainer" containerID="b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148" Oct 2 19:18:41.988148 kubelet[2203]: I1002 19:18:41.988104 2203 scope.go:115] "RemoveContainer" containerID="b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148" Oct 2 19:18:41.990754 env[1743]: time="2023-10-02T19:18:41.990576432Z" level=info msg="RemoveContainer for \"b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148\"" Oct 2 19:18:41.991199 env[1743]: time="2023-10-02T19:18:41.991155073Z" level=info msg="RemoveContainer for \"b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148\"" Oct 2 19:18:41.991448 env[1743]: time="2023-10-02T19:18:41.991400390Z" level=error msg="RemoveContainer for \"b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148\" failed" error="failed to set removing state for container 
\"b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148\": container is already in removing state" Oct 2 19:18:41.992340 kubelet[2203]: E1002 19:18:41.991770 2203 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148\": container is already in removing state" containerID="b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148" Oct 2 19:18:41.992340 kubelet[2203]: E1002 19:18:41.991826 2203 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148": container is already in removing state; Skipping pod "cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa)" Oct 2 19:18:41.992340 kubelet[2203]: E1002 19:18:41.992293 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-zlqcm_kube-system(ec8678b8-87b9-47df-9317-82b9208c54aa)\"" pod="kube-system/cilium-zlqcm" podUID=ec8678b8-87b9-47df-9317-82b9208c54aa Oct 2 19:18:42.000343 env[1743]: time="2023-10-02T19:18:42.000266057Z" level=info msg="RemoveContainer for \"b6d6094a3f885c5c93372dc9e27f204fee750f280e8af32d2c88c3bee3291148\" returns successfully" Oct 2 19:18:42.261513 kubelet[2203]: E1002 19:18:42.261386 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:42.560202 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43e5731f554164ee1c757e1d0909f4e6291250e742b377b85e687a2093be87fd-rootfs.mount: Deactivated successfully. 
Oct 2 19:18:43.262460 kubelet[2203]: E1002 19:18:43.262421 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:44.263888 kubelet[2203]: E1002 19:18:44.263788 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:44.806605 kubelet[2203]: W1002 19:18:44.806524 2203 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec8678b8_87b9_47df_9317_82b9208c54aa.slice/cri-containerd-43e5731f554164ee1c757e1d0909f4e6291250e742b377b85e687a2093be87fd.scope WatchSource:0}: task 43e5731f554164ee1c757e1d0909f4e6291250e742b377b85e687a2093be87fd not found: not found Oct 2 19:18:45.264706 kubelet[2203]: E1002 19:18:45.264662 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:45.327715 kubelet[2203]: E1002 19:18:45.327664 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:46.265597 kubelet[2203]: E1002 19:18:46.265554 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:47.266914 kubelet[2203]: E1002 19:18:47.266837 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:48.267750 kubelet[2203]: E1002 19:18:48.267681 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:49.263838 env[1743]: time="2023-10-02T19:18:49.261021343Z" level=info msg="StopPodSandbox for \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\"" Oct 2 19:18:49.263838 env[1743]: time="2023-10-02T19:18:49.261128454Z" level=info msg="Container to stop \"43e5731f554164ee1c757e1d0909f4e6291250e742b377b85e687a2093be87fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:18:49.263551 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c-shm.mount: Deactivated successfully. Oct 2 19:18:49.269092 kubelet[2203]: E1002 19:18:49.269043 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:49.280000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:18:49.281056 systemd[1]: cri-containerd-1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c.scope: Deactivated successfully. Oct 2 19:18:49.284530 kernel: kauditd_printk_skb: 165 callbacks suppressed Oct 2 19:18:49.284678 kernel: audit: type=1334 audit(1696274329.280:727): prog-id=73 op=UNLOAD Oct 2 19:18:49.289000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:18:49.293927 kernel: audit: type=1334 audit(1696274329.289:728): prog-id=76 op=UNLOAD Oct 2 19:18:49.338401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c-rootfs.mount: Deactivated successfully. 
Oct 2 19:18:49.359918 env[1743]: time="2023-10-02T19:18:49.359799763Z" level=info msg="shim disconnected" id=1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c Oct 2 19:18:49.360240 env[1743]: time="2023-10-02T19:18:49.360189820Z" level=warning msg="cleaning up after shim disconnected" id=1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c namespace=k8s.io Oct 2 19:18:49.360240 env[1743]: time="2023-10-02T19:18:49.360230392Z" level=info msg="cleaning up dead shim" Oct 2 19:18:49.387293 env[1743]: time="2023-10-02T19:18:49.387230124Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2962 runtime=io.containerd.runc.v2\n" Oct 2 19:18:49.387845 env[1743]: time="2023-10-02T19:18:49.387795800Z" level=info msg="TearDown network for sandbox \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\" successfully" Oct 2 19:18:49.387989 env[1743]: time="2023-10-02T19:18:49.387843583Z" level=info msg="StopPodSandbox for \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\" returns successfully" Oct 2 19:18:49.546450 kubelet[2203]: I1002 19:18:49.545534 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-cilium-cgroup\") pod \"ec8678b8-87b9-47df-9317-82b9208c54aa\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " Oct 2 19:18:49.546450 kubelet[2203]: I1002 19:18:49.546186 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-cni-path\") pod \"ec8678b8-87b9-47df-9317-82b9208c54aa\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " Oct 2 19:18:49.546450 kubelet[2203]: I1002 19:18:49.546238 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-host-proc-sys-net\") pod \"ec8678b8-87b9-47df-9317-82b9208c54aa\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " Oct 2 19:18:49.546450 kubelet[2203]: I1002 19:18:49.546278 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-lib-modules\") pod \"ec8678b8-87b9-47df-9317-82b9208c54aa\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " Oct 2 19:18:49.546450 kubelet[2203]: I1002 19:18:49.546315 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-etc-cni-netd\") pod \"ec8678b8-87b9-47df-9317-82b9208c54aa\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " Oct 2 19:18:49.546450 kubelet[2203]: I1002 19:18:49.546361 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ec8678b8-87b9-47df-9317-82b9208c54aa-hubble-tls\") pod \"ec8678b8-87b9-47df-9317-82b9208c54aa\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " Oct 2 19:18:49.546999 kubelet[2203]: I1002 19:18:49.546398 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-xtables-lock\") pod \"ec8678b8-87b9-47df-9317-82b9208c54aa\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " Oct 2 
19:18:49.546999 kubelet[2203]: I1002 19:18:49.546446 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec8678b8-87b9-47df-9317-82b9208c54aa-cilium-config-path\") pod \"ec8678b8-87b9-47df-9317-82b9208c54aa\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " Oct 2 19:18:49.546999 kubelet[2203]: I1002 19:18:49.546487 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-cilium-run\") pod \"ec8678b8-87b9-47df-9317-82b9208c54aa\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " Oct 2 19:18:49.546999 kubelet[2203]: I1002 19:18:49.546524 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-hostproc\") pod \"ec8678b8-87b9-47df-9317-82b9208c54aa\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " Oct 2 19:18:49.546999 kubelet[2203]: I1002 19:18:49.546569 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk8rm\" (UniqueName: \"kubernetes.io/projected/ec8678b8-87b9-47df-9317-82b9208c54aa-kube-api-access-mk8rm\") pod \"ec8678b8-87b9-47df-9317-82b9208c54aa\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " Oct 2 19:18:49.546999 kubelet[2203]: I1002 19:18:49.546608 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-host-proc-sys-kernel\") pod \"ec8678b8-87b9-47df-9317-82b9208c54aa\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " Oct 2 19:18:49.547366 kubelet[2203]: I1002 19:18:49.546653 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ec8678b8-87b9-47df-9317-82b9208c54aa-clustermesh-secrets\") pod \"ec8678b8-87b9-47df-9317-82b9208c54aa\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " Oct 2 19:18:49.547366 kubelet[2203]: I1002 19:18:49.546697 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-bpf-maps\") pod \"ec8678b8-87b9-47df-9317-82b9208c54aa\" (UID: \"ec8678b8-87b9-47df-9317-82b9208c54aa\") " Oct 2 19:18:49.547366 kubelet[2203]: I1002 19:18:49.546764 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ec8678b8-87b9-47df-9317-82b9208c54aa" (UID: "ec8678b8-87b9-47df-9317-82b9208c54aa"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:49.547366 kubelet[2203]: I1002 19:18:49.546826 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ec8678b8-87b9-47df-9317-82b9208c54aa" (UID: "ec8678b8-87b9-47df-9317-82b9208c54aa"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:49.547366 kubelet[2203]: I1002 19:18:49.546864 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-cni-path" (OuterVolumeSpecName: "cni-path") pod "ec8678b8-87b9-47df-9317-82b9208c54aa" (UID: "ec8678b8-87b9-47df-9317-82b9208c54aa"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:49.547679 kubelet[2203]: I1002 19:18:49.546935 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ec8678b8-87b9-47df-9317-82b9208c54aa" (UID: "ec8678b8-87b9-47df-9317-82b9208c54aa"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:49.547679 kubelet[2203]: I1002 19:18:49.546976 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ec8678b8-87b9-47df-9317-82b9208c54aa" (UID: "ec8678b8-87b9-47df-9317-82b9208c54aa"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:49.547679 kubelet[2203]: I1002 19:18:49.547014 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ec8678b8-87b9-47df-9317-82b9208c54aa" (UID: "ec8678b8-87b9-47df-9317-82b9208c54aa"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:49.548271 kubelet[2203]: I1002 19:18:49.548038 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-hostproc" (OuterVolumeSpecName: "hostproc") pod "ec8678b8-87b9-47df-9317-82b9208c54aa" (UID: "ec8678b8-87b9-47df-9317-82b9208c54aa"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:49.548271 kubelet[2203]: I1002 19:18:49.548145 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ec8678b8-87b9-47df-9317-82b9208c54aa" (UID: "ec8678b8-87b9-47df-9317-82b9208c54aa"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:49.548722 kubelet[2203]: W1002 19:18:49.548678 2203 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/ec8678b8-87b9-47df-9317-82b9208c54aa/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:18:49.555204 kubelet[2203]: I1002 19:18:49.555153 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec8678b8-87b9-47df-9317-82b9208c54aa-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ec8678b8-87b9-47df-9317-82b9208c54aa" (UID: "ec8678b8-87b9-47df-9317-82b9208c54aa"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:18:49.555466 kubelet[2203]: I1002 19:18:49.555437 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ec8678b8-87b9-47df-9317-82b9208c54aa" (UID: "ec8678b8-87b9-47df-9317-82b9208c54aa"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:49.555637 kubelet[2203]: I1002 19:18:49.555610 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ec8678b8-87b9-47df-9317-82b9208c54aa" (UID: "ec8678b8-87b9-47df-9317-82b9208c54aa"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:49.560667 systemd[1]: var-lib-kubelet-pods-ec8678b8\x2d87b9\x2d47df\x2d9317\x2d82b9208c54aa-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:18:49.564592 kubelet[2203]: I1002 19:18:49.563452 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec8678b8-87b9-47df-9317-82b9208c54aa-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ec8678b8-87b9-47df-9317-82b9208c54aa" (UID: "ec8678b8-87b9-47df-9317-82b9208c54aa"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:18:49.568924 systemd[1]: var-lib-kubelet-pods-ec8678b8\x2d87b9\x2d47df\x2d9317\x2d82b9208c54aa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmk8rm.mount: Deactivated successfully. Oct 2 19:18:49.572834 systemd[1]: var-lib-kubelet-pods-ec8678b8\x2d87b9\x2d47df\x2d9317\x2d82b9208c54aa-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:18:49.575409 kubelet[2203]: I1002 19:18:49.575332 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec8678b8-87b9-47df-9317-82b9208c54aa-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ec8678b8-87b9-47df-9317-82b9208c54aa" (UID: "ec8678b8-87b9-47df-9317-82b9208c54aa"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:18:49.575948 kubelet[2203]: I1002 19:18:49.575903 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec8678b8-87b9-47df-9317-82b9208c54aa-kube-api-access-mk8rm" (OuterVolumeSpecName: "kube-api-access-mk8rm") pod "ec8678b8-87b9-47df-9317-82b9208c54aa" (UID: "ec8678b8-87b9-47df-9317-82b9208c54aa"). InnerVolumeSpecName "kube-api-access-mk8rm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:18:49.647356 kubelet[2203]: I1002 19:18:49.647288 2203 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-bpf-maps\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:18:49.647356 kubelet[2203]: I1002 19:18:49.647363 2203 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-cilium-cgroup\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:18:49.647629 kubelet[2203]: I1002 19:18:49.647392 2203 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-cni-path\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:18:49.647629 kubelet[2203]: I1002 19:18:49.647417 2203 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-host-proc-sys-net\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:18:49.647629 kubelet[2203]: I1002 19:18:49.647446 2203 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-lib-modules\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:18:49.647629 kubelet[2203]: I1002 19:18:49.647469 2203 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ec8678b8-87b9-47df-9317-82b9208c54aa-hubble-tls\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:18:49.647629 kubelet[2203]: I1002 19:18:49.647492 2203 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-xtables-lock\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:18:49.647629 kubelet[2203]: I1002 19:18:49.647516 2203 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec8678b8-87b9-47df-9317-82b9208c54aa-cilium-config-path\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:18:49.647629 kubelet[2203]: I1002 19:18:49.647538 2203 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-cilium-run\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:18:49.647629 kubelet[2203]: I1002 19:18:49.647559 2203 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-hostproc\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:18:49.648159 kubelet[2203]: I1002 19:18:49.647581 2203 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-etc-cni-netd\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:18:49.648159 kubelet[2203]: I1002 19:18:49.647609 2203 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-mk8rm\" (UniqueName: \"kubernetes.io/projected/ec8678b8-87b9-47df-9317-82b9208c54aa-kube-api-access-mk8rm\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:18:49.648159 kubelet[2203]: I1002 19:18:49.647632 2203 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ec8678b8-87b9-47df-9317-82b9208c54aa-host-proc-sys-kernel\") on node 
\"172.31.22.12\" DevicePath \"\"" Oct 2 19:18:49.648159 kubelet[2203]: I1002 19:18:49.647657 2203 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ec8678b8-87b9-47df-9317-82b9208c54aa-clustermesh-secrets\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:18:50.008116 kubelet[2203]: I1002 19:18:50.008069 2203 scope.go:115] "RemoveContainer" containerID="43e5731f554164ee1c757e1d0909f4e6291250e742b377b85e687a2093be87fd" Oct 2 19:18:50.009661 env[1743]: time="2023-10-02T19:18:50.009587459Z" level=info msg="RemoveContainer for \"43e5731f554164ee1c757e1d0909f4e6291250e742b377b85e687a2093be87fd\"" Oct 2 19:18:50.013871 env[1743]: time="2023-10-02T19:18:50.013787810Z" level=info msg="RemoveContainer for \"43e5731f554164ee1c757e1d0909f4e6291250e742b377b85e687a2093be87fd\" returns successfully" Oct 2 19:18:50.018623 systemd[1]: Removed slice kubepods-burstable-podec8678b8_87b9_47df_9317_82b9208c54aa.slice. Oct 2 19:18:50.271148 kubelet[2203]: E1002 19:18:50.270562 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:50.329094 kubelet[2203]: E1002 19:18:50.329061 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:50.546058 env[1743]: time="2023-10-02T19:18:50.545846336Z" level=info msg="StopPodSandbox for \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\"" Oct 2 19:18:50.546585 env[1743]: time="2023-10-02T19:18:50.545996503Z" level=info msg="TearDown network for sandbox \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\" successfully" Oct 2 19:18:50.546585 env[1743]: time="2023-10-02T19:18:50.546403311Z" level=info msg="StopPodSandbox for \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\" returns successfully" Oct 2 19:18:50.548643 kubelet[2203]: I1002 19:18:50.547918 2203 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ec8678b8-87b9-47df-9317-82b9208c54aa path="/var/lib/kubelet/pods/ec8678b8-87b9-47df-9317-82b9208c54aa/volumes" Oct 2 19:18:51.271475 kubelet[2203]: E1002 19:18:51.271413 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:52.272392 kubelet[2203]: E1002 19:18:52.272325 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:53.024860 kubelet[2203]: I1002 19:18:53.024802 2203 topology_manager.go:210] "Topology Admit Handler" Oct 2 19:18:53.025070 kubelet[2203]: E1002 19:18:53.024919 2203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ec8678b8-87b9-47df-9317-82b9208c54aa" containerName="mount-cgroup" Oct 2 19:18:53.025070 kubelet[2203]: E1002 19:18:53.024942 2203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ec8678b8-87b9-47df-9317-82b9208c54aa" containerName="mount-cgroup" Oct 2 19:18:53.025070 kubelet[2203]: E1002 19:18:53.024961 2203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ec8678b8-87b9-47df-9317-82b9208c54aa" containerName="mount-cgroup" Oct 2 19:18:53.025070 kubelet[2203]: E1002 19:18:53.025005 2203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ec8678b8-87b9-47df-9317-82b9208c54aa" containerName="mount-cgroup" Oct 2 19:18:53.025070 kubelet[2203]: I1002 19:18:53.025037 2203 
memory_manager.go:346] "RemoveStaleState removing state" podUID="ec8678b8-87b9-47df-9317-82b9208c54aa" containerName="mount-cgroup" Oct 2 19:18:53.025403 kubelet[2203]: I1002 19:18:53.025077 2203 memory_manager.go:346] "RemoveStaleState removing state" podUID="ec8678b8-87b9-47df-9317-82b9208c54aa" containerName="mount-cgroup" Oct 2 19:18:53.025403 kubelet[2203]: I1002 19:18:53.025098 2203 memory_manager.go:346] "RemoveStaleState removing state" podUID="ec8678b8-87b9-47df-9317-82b9208c54aa" containerName="mount-cgroup" Oct 2 19:18:53.025403 kubelet[2203]: I1002 19:18:53.025114 2203 memory_manager.go:346] "RemoveStaleState removing state" podUID="ec8678b8-87b9-47df-9317-82b9208c54aa" containerName="mount-cgroup" Oct 2 19:18:53.034301 systemd[1]: Created slice kubepods-besteffort-pode88f079c_c694_463c_835c_46bd593a45c4.slice. Oct 2 19:18:53.041591 kubelet[2203]: I1002 19:18:53.041550 2203 topology_manager.go:210] "Topology Admit Handler" Oct 2 19:18:53.041849 kubelet[2203]: E1002 19:18:53.041826 2203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ec8678b8-87b9-47df-9317-82b9208c54aa" containerName="mount-cgroup" Oct 2 19:18:53.042020 kubelet[2203]: I1002 19:18:53.041998 2203 memory_manager.go:346] "RemoveStaleState removing state" podUID="ec8678b8-87b9-47df-9317-82b9208c54aa" containerName="mount-cgroup" Oct 2 19:18:53.042149 kubelet[2203]: E1002 19:18:53.042127 2203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ec8678b8-87b9-47df-9317-82b9208c54aa" containerName="mount-cgroup" Oct 2 19:18:53.042291 kubelet[2203]: I1002 19:18:53.042270 2203 memory_manager.go:346] "RemoveStaleState removing state" podUID="ec8678b8-87b9-47df-9317-82b9208c54aa" containerName="mount-cgroup" Oct 2 19:18:53.048470 kubelet[2203]: W1002 19:18:53.048408 2203 reflector.go:424] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.22.12" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.22.12' and this object Oct 2 19:18:53.050022 kubelet[2203]: E1002 19:18:53.048628 2203 reflector.go:140] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.22.12" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.22.12' and this object Oct 2 19:18:53.052116 systemd[1]: Created slice kubepods-burstable-pode8056063_246d_470a_9e16_717ad060bbe8.slice. 
Oct 2 19:18:53.055648 kubelet[2203]: W1002 19:18:53.055609 2203 reflector.go:424] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172.31.22.12" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.22.12' and this object Oct 2 19:18:53.056148 kubelet[2203]: E1002 19:18:53.056121 2203 reflector.go:140] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172.31.22.12" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.22.12' and this object Oct 2 19:18:53.056351 kubelet[2203]: W1002 19:18:53.055978 2203 reflector.go:424] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.31.22.12" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.22.12' and this object Oct 2 19:18:53.056513 kubelet[2203]: E1002 19:18:53.056490 2203 reflector.go:140] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.31.22.12" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.22.12' and this object Oct 2 19:18:53.056660 kubelet[2203]: W1002 19:18:53.056076 2203 reflector.go:424] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172.31.22.12" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.22.12' and this object Oct 2 19:18:53.056785 kubelet[2203]: E1002 19:18:53.056764 2203 reflector.go:140] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172.31.22.12" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.22.12' and this object Oct 2 19:18:53.167449 kubelet[2203]: I1002 19:18:53.167390 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t644\" (UniqueName: \"kubernetes.io/projected/e8056063-246d-470a-9e16-717ad060bbe8-kube-api-access-8t644\") pod \"cilium-pn6mz\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " pod="kube-system/cilium-pn6mz" Oct 2 19:18:53.167622 kubelet[2203]: I1002 19:18:53.167501 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksmvr\" (UniqueName: \"kubernetes.io/projected/e88f079c-c694-463c-835c-46bd593a45c4-kube-api-access-ksmvr\") pod \"cilium-operator-f59cbd8c6-wq87t\" (UID: \"e88f079c-c694-463c-835c-46bd593a45c4\") " pod="kube-system/cilium-operator-f59cbd8c6-wq87t" Oct 2 19:18:53.167622 kubelet[2203]: I1002 19:18:53.167575 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-hostproc\") pod \"cilium-pn6mz\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " pod="kube-system/cilium-pn6mz" Oct 2 19:18:53.167792 kubelet[2203]: I1002 19:18:53.167647 
2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-lib-modules\") pod \"cilium-pn6mz\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " pod="kube-system/cilium-pn6mz" Oct 2 19:18:53.167792 kubelet[2203]: I1002 19:18:53.167722 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-xtables-lock\") pod \"cilium-pn6mz\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " pod="kube-system/cilium-pn6mz" Oct 2 19:18:53.167966 kubelet[2203]: I1002 19:18:53.167768 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8056063-246d-470a-9e16-717ad060bbe8-cilium-config-path\") pod \"cilium-pn6mz\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " pod="kube-system/cilium-pn6mz" Oct 2 19:18:53.167966 kubelet[2203]: I1002 19:18:53.167840 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-host-proc-sys-net\") pod \"cilium-pn6mz\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " pod="kube-system/cilium-pn6mz" Oct 2 19:18:53.167966 kubelet[2203]: I1002 19:18:53.167930 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-host-proc-sys-kernel\") pod \"cilium-pn6mz\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " pod="kube-system/cilium-pn6mz" Oct 2 19:18:53.168156 kubelet[2203]: I1002 19:18:53.168003 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-bpf-maps\") pod \"cilium-pn6mz\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " pod="kube-system/cilium-pn6mz" Oct 2 19:18:53.168156 kubelet[2203]: I1002 19:18:53.168049 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-cilium-cgroup\") pod \"cilium-pn6mz\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " pod="kube-system/cilium-pn6mz" Oct 2 19:18:53.168156 kubelet[2203]: I1002 19:18:53.168124 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-cni-path\") pod \"cilium-pn6mz\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " pod="kube-system/cilium-pn6mz" Oct 2 19:18:53.168359 kubelet[2203]: I1002 19:18:53.168194 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e8056063-246d-470a-9e16-717ad060bbe8-clustermesh-secrets\") pod \"cilium-pn6mz\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " pod="kube-system/cilium-pn6mz" Oct 2 19:18:53.168359 kubelet[2203]: I1002 19:18:53.168283 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/e8056063-246d-470a-9e16-717ad060bbe8-cilium-ipsec-secrets\") pod \"cilium-pn6mz\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " pod="kube-system/cilium-pn6mz" Oct 2 19:18:53.168359 kubelet[2203]: I1002 19:18:53.168353 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-cilium-run\") pod \"cilium-pn6mz\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " pod="kube-system/cilium-pn6mz" Oct 2 19:18:53.168550 kubelet[2203]: I1002 19:18:53.168428 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e88f079c-c694-463c-835c-46bd593a45c4-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-wq87t\" (UID: \"e88f079c-c694-463c-835c-46bd593a45c4\") " pod="kube-system/cilium-operator-f59cbd8c6-wq87t" Oct 2 19:18:53.168550 kubelet[2203]: I1002 19:18:53.168486 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-etc-cni-netd\") pod \"cilium-pn6mz\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " pod="kube-system/cilium-pn6mz" Oct 2 19:18:53.168671 kubelet[2203]: I1002 19:18:53.168553 2203 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e8056063-246d-470a-9e16-717ad060bbe8-hubble-tls\") pod \"cilium-pn6mz\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " pod="kube-system/cilium-pn6mz" Oct 2 19:18:53.273028 kubelet[2203]: E1002 19:18:53.272961 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:54.241737 env[1743]: time="2023-10-02T19:18:54.241647658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-wq87t,Uid:e88f079c-c694-463c-835c-46bd593a45c4,Namespace:kube-system,Attempt:0,}" Oct 2 19:18:54.270495 kubelet[2203]: E1002 19:18:54.270435 2203 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Oct 2 19:18:54.270495 kubelet[2203]: E1002 19:18:54.270482 2203 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-pn6mz: failed to sync secret cache: timed out waiting for the condition Oct 2 19:18:54.270738 kubelet[2203]: E1002 19:18:54.270592 2203 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e8056063-246d-470a-9e16-717ad060bbe8-hubble-tls podName:e8056063-246d-470a-9e16-717ad060bbe8 nodeName:}" failed. No retries permitted until 2023-10-02 19:18:54.770559283 +0000 UTC m=+215.839262879 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/e8056063-246d-470a-9e16-717ad060bbe8-hubble-tls") pod "cilium-pn6mz" (UID: "e8056063-246d-470a-9e16-717ad060bbe8") : failed to sync secret cache: timed out waiting for the condition Oct 2 19:18:54.270910 kubelet[2203]: E1002 19:18:54.270866 2203 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Oct 2 19:18:54.271008 kubelet[2203]: E1002 19:18:54.270965 2203 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e8056063-246d-470a-9e16-717ad060bbe8-cilium-ipsec-secrets podName:e8056063-246d-470a-9e16-717ad060bbe8 nodeName:}" failed. No retries permitted until 2023-10-02 19:18:54.770942848 +0000 UTC m=+215.839646432 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/e8056063-246d-470a-9e16-717ad060bbe8-cilium-ipsec-secrets") pod "cilium-pn6mz" (UID: "e8056063-246d-470a-9e16-717ad060bbe8") : failed to sync secret cache: timed out waiting for the condition Oct 2 19:18:54.273994 kubelet[2203]: E1002 19:18:54.273926 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:54.282593 env[1743]: time="2023-10-02T19:18:54.282465450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:18:54.282783 env[1743]: time="2023-10-02T19:18:54.282620957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:18:54.282783 env[1743]: time="2023-10-02T19:18:54.282705352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:18:54.283322 env[1743]: time="2023-10-02T19:18:54.283144945Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e54f06d4dc8a3cd5ae9020c5cc4f74f70483fbbd7bb4dcc905cf732b95ec1375 pid=2992 runtime=io.containerd.runc.v2 Oct 2 19:18:54.320083 systemd[1]: Started cri-containerd-e54f06d4dc8a3cd5ae9020c5cc4f74f70483fbbd7bb4dcc905cf732b95ec1375.scope. Oct 2 19:18:54.334325 systemd[1]: run-containerd-runc-k8s.io-e54f06d4dc8a3cd5ae9020c5cc4f74f70483fbbd7bb4dcc905cf732b95ec1375-runc.URRydR.mount: Deactivated successfully. 
Oct 2 19:18:54.380000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.380000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.396099 kernel: audit: type=1400 audit(1696274334.380:729): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.396367 kernel: audit: type=1400 audit(1696274334.380:730): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.396421 kernel: audit: type=1400 audit(1696274334.380:731): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.380000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.380000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.411052 kernel: audit: type=1400 audit(1696274334.380:732): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.411205 kernel: audit: type=1400 audit(1696274334.380:733): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.380000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.380000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.427217 kernel: audit: type=1400 audit(1696274334.380:734): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.380000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.439359 kernel: audit: type=1400 audit(1696274334.380:735): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.380000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.463114 kernel: audit: type=1400 audit(1696274334.380:736): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.463234 kernel: audit: type=1400 audit(1696274334.380:737): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.463280 kernel: audit: type=1400 audit(1696274334.388:738): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.380000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.388000 audit: BPF prog-id=84 op=LOAD Oct 2 19:18:54.388000 audit[3002]: AVC avc: denied { bpf } for pid=3002 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.388000 audit[3002]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=400014db38 a2=10 a3=0 items=0 ppid=2992 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535346630366434646338613363643561653930323063356363346637 Oct 2 19:18:54.388000 audit[3002]: AVC avc: denied { perfmon } for pid=3002 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.388000 audit[3002]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=400014d5a0 a2=3c a3=0 items=0 ppid=2992 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535346630366434646338613363643561653930323063356363346637 Oct 2 19:18:54.388000 audit[3002]: AVC avc: denied { bpf } for pid=3002 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.388000 audit[3002]: AVC avc: denied { bpf } for pid=3002 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.388000 audit[3002]: AVC avc: denied { bpf } for pid=3002 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.388000 audit[3002]: AVC avc: denied { perfmon } for pid=3002 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:18:54.388000 audit[3002]: AVC avc: denied { perfmon } for pid=3002 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.388000 audit[3002]: AVC avc: denied { perfmon } for pid=3002 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.388000 audit[3002]: AVC avc: denied { perfmon } for pid=3002 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.388000 audit[3002]: AVC avc: denied { perfmon } for pid=3002 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.388000 audit[3002]: AVC avc: denied { bpf } for pid=3002 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.388000 audit[3002]: AVC avc: denied { bpf } for pid=3002 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.388000 audit: BPF prog-id=85 op=LOAD Oct 2 19:18:54.388000 audit[3002]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400014d8e0 a2=78 a3=0 items=0 ppid=2992 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535346630366434646338613363643561653930323063356363346637 Oct 2 19:18:54.395000 audit[3002]: AVC avc: denied { bpf } for pid=3002 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.395000 audit[3002]: AVC avc: denied { bpf } for pid=3002 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.395000 audit[3002]: AVC avc: denied { perfmon } for pid=3002 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.395000 audit[3002]: AVC avc: denied { perfmon } for pid=3002 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.395000 audit[3002]: AVC avc: denied { perfmon } for pid=3002 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.395000 audit[3002]: AVC avc: denied { perfmon } for pid=3002 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.395000 audit[3002]: AVC avc: denied { perfmon } for pid=3002 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.395000 audit[3002]: AVC avc: denied { bpf } for pid=3002 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.395000 audit[3002]: AVC avc: denied { bpf } for pid=3002 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.395000 audit: BPF prog-id=86 op=LOAD Oct 2 19:18:54.395000 audit[3002]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400014d670 a2=78 a3=0 items=0 ppid=2992 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535346630366434646338613363643561653930323063356363346637 Oct 2 19:18:54.403000 audit: BPF prog-id=86 op=UNLOAD Oct 2 19:18:54.403000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:18:54.403000 audit[3002]: AVC avc: denied { bpf } for pid=3002 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.403000 audit[3002]: AVC avc: denied { bpf } for pid=3002 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.403000 audit[3002]: AVC avc: denied { bpf } for pid=3002 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.403000 audit[3002]: AVC avc: denied { perfmon } for pid=3002 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.403000 audit[3002]: AVC avc: denied { perfmon } for pid=3002 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.403000 audit[3002]: AVC avc: denied { perfmon } for pid=3002 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.403000 audit[3002]: AVC avc: denied { perfmon } for pid=3002 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.403000 audit[3002]: AVC avc: denied { perfmon } for pid=3002 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.403000 audit[3002]: AVC avc: denied { bpf } for pid=3002 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.403000 audit[3002]: AVC avc: denied { bpf } for pid=3002 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.403000 audit: BPF prog-id=87 op=LOAD Oct 2 19:18:54.403000 audit[3002]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400014db40 a2=78 a3=0 items=0 ppid=2992 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535346630366434646338613363643561653930323063356363346637 Oct 2 19:18:54.506706 env[1743]: time="2023-10-02T19:18:54.502834541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-wq87t,Uid:e88f079c-c694-463c-835c-46bd593a45c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e54f06d4dc8a3cd5ae9020c5cc4f74f70483fbbd7bb4dcc905cf732b95ec1375\"" Oct 2 19:18:54.508006 env[1743]: time="2023-10-02T19:18:54.507937155Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 2 19:18:54.863258 env[1743]: time="2023-10-02T19:18:54.863190087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pn6mz,Uid:e8056063-246d-470a-9e16-717ad060bbe8,Namespace:kube-system,Attempt:0,}" Oct 2 19:18:54.902029 env[1743]: time="2023-10-02T19:18:54.901631513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:18:54.902029 env[1743]: time="2023-10-02T19:18:54.901701388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:18:54.902029 env[1743]: time="2023-10-02T19:18:54.901726756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:18:54.902556 env[1743]: time="2023-10-02T19:18:54.902131021Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a pid=3035 runtime=io.containerd.runc.v2 Oct 2 19:18:54.932352 systemd[1]: Started cri-containerd-d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a.scope. 
Oct 2 19:18:54.971000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.971000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.971000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.971000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.971000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.971000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.971000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.971000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.971000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.971000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.971000 audit: BPF prog-id=88 op=LOAD Oct 2 19:18:54.973000 audit[3044]: AVC avc: denied { bpf } for pid=3044 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.973000 audit[3044]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=3035 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.973000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438613666376563623466353663333439323138396264643565386432 Oct 2 19:18:54.973000 audit[3044]: AVC avc: denied { perfmon } for pid=3044 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.973000 audit[3044]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=3035 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.973000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438613666376563623466353663333439323138396264643565386432 Oct 2 19:18:54.973000 audit[3044]: AVC avc: denied { bpf } for pid=3044 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.973000 audit[3044]: AVC avc: denied { bpf } for pid=3044 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.973000 audit[3044]: AVC avc: denied { bpf } for pid=3044 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.973000 audit[3044]: AVC avc: denied { perfmon } for pid=3044 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.973000 audit[3044]: AVC avc: denied { perfmon } for pid=3044 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.973000 audit[3044]: AVC avc: denied { perfmon } for pid=3044 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.973000 audit[3044]: AVC avc: denied { perfmon } for pid=3044 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.973000 audit[3044]: AVC avc: denied { perfmon } for pid=3044 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.973000 audit[3044]: AVC avc: denied { bpf } for pid=3044 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.973000 audit[3044]: AVC avc: denied { bpf } for pid=3044 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.973000 audit: BPF prog-id=89 op=LOAD Oct 2 19:18:54.973000 audit[3044]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=3035 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.973000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438613666376563623466353663333439323138396264643565386432 Oct 2 19:18:54.973000 audit[3044]: AVC avc: denied { bpf } for pid=3044 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.973000 audit[3044]: AVC avc: denied { bpf } for pid=3044 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.973000 audit[3044]: AVC avc: denied { 
perfmon } for pid=3044 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.973000 audit[3044]: AVC avc: denied { perfmon } for pid=3044 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.973000 audit[3044]: AVC avc: denied { perfmon } for pid=3044 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.973000 audit[3044]: AVC avc: denied { perfmon } for pid=3044 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.973000 audit[3044]: AVC avc: denied { perfmon } for pid=3044 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.973000 audit[3044]: AVC avc: denied { bpf } for pid=3044 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.973000 audit[3044]: AVC avc: denied { bpf } for pid=3044 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.973000 audit: BPF prog-id=90 op=LOAD Oct 2 19:18:54.973000 audit[3044]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=3035 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.973000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438613666376563623466353663333439323138396264643565386432 Oct 2 19:18:54.974000 audit: BPF prog-id=90 op=UNLOAD Oct 2 19:18:54.974000 audit: BPF prog-id=89 op=UNLOAD Oct 2 19:18:54.974000 audit[3044]: AVC avc: denied { bpf } for pid=3044 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.974000 audit[3044]: AVC avc: denied { bpf } for pid=3044 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.974000 audit[3044]: AVC avc: denied { bpf } for pid=3044 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.974000 audit[3044]: AVC avc: denied { perfmon } for pid=3044 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.974000 audit[3044]: AVC avc: denied { perfmon } for pid=3044 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.974000 audit[3044]: AVC avc: denied { perfmon } for pid=3044 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.974000 audit[3044]: AVC avc: denied { perfmon } 
for pid=3044 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.974000 audit[3044]: AVC avc: denied { perfmon } for pid=3044 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.974000 audit[3044]: AVC avc: denied { bpf } for pid=3044 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.974000 audit[3044]: AVC avc: denied { bpf } for pid=3044 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:54.974000 audit: BPF prog-id=91 op=LOAD Oct 2 19:18:54.974000 audit[3044]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=3035 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:54.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438613666376563623466353663333439323138396264643565386432 Oct 2 19:18:55.006402 env[1743]: time="2023-10-02T19:18:55.006328896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pn6mz,Uid:e8056063-246d-470a-9e16-717ad060bbe8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a\"" Oct 2 19:18:55.011190 env[1743]: time="2023-10-02T19:18:55.011114209Z" level=info msg="CreateContainer within sandbox \"d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:18:55.037964 env[1743]: time="2023-10-02T19:18:55.037901684Z" level=info msg="CreateContainer within sandbox \"d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4ab6f1865e8f19e331616f8233c25ebeef897aab9fa5c988e68a45b200cbe559\"" Oct 2 19:18:55.039533 env[1743]: time="2023-10-02T19:18:55.039464445Z" level=info msg="StartContainer for \"4ab6f1865e8f19e331616f8233c25ebeef897aab9fa5c988e68a45b200cbe559\"" Oct 2 19:18:55.081093 systemd[1]: Started cri-containerd-4ab6f1865e8f19e331616f8233c25ebeef897aab9fa5c988e68a45b200cbe559.scope. Oct 2 19:18:55.116972 systemd[1]: cri-containerd-4ab6f1865e8f19e331616f8233c25ebeef897aab9fa5c988e68a45b200cbe559.scope: Deactivated successfully. 
Oct 2 19:18:55.151543 env[1743]: time="2023-10-02T19:18:55.151470519Z" level=info msg="shim disconnected" id=4ab6f1865e8f19e331616f8233c25ebeef897aab9fa5c988e68a45b200cbe559 Oct 2 19:18:55.151543 env[1743]: time="2023-10-02T19:18:55.151547895Z" level=warning msg="cleaning up after shim disconnected" id=4ab6f1865e8f19e331616f8233c25ebeef897aab9fa5c988e68a45b200cbe559 namespace=k8s.io Oct 2 19:18:55.151955 env[1743]: time="2023-10-02T19:18:55.151570526Z" level=info msg="cleaning up dead shim" Oct 2 19:18:55.180027 env[1743]: time="2023-10-02T19:18:55.179953162Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3091 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:18:55Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/4ab6f1865e8f19e331616f8233c25ebeef897aab9fa5c988e68a45b200cbe559/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:18:55.180531 env[1743]: time="2023-10-02T19:18:55.180443239Z" level=error msg="copy shim log" error="read /proc/self/fd/40: file already closed" Oct 2 19:18:55.181180 env[1743]: time="2023-10-02T19:18:55.181109630Z" level=error msg="Failed to pipe stderr of container \"4ab6f1865e8f19e331616f8233c25ebeef897aab9fa5c988e68a45b200cbe559\"" error="reading from a closed fifo" Oct 2 19:18:55.181894 env[1743]: time="2023-10-02T19:18:55.181827848Z" level=error msg="Failed to pipe stdout of container \"4ab6f1865e8f19e331616f8233c25ebeef897aab9fa5c988e68a45b200cbe559\"" error="reading from a closed fifo" Oct 2 19:18:55.184524 env[1743]: time="2023-10-02T19:18:55.184449601Z" level=error msg="StartContainer for \"4ab6f1865e8f19e331616f8233c25ebeef897aab9fa5c988e68a45b200cbe559\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:18:55.185078 kubelet[2203]: E1002 19:18:55.184791 2203 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="4ab6f1865e8f19e331616f8233c25ebeef897aab9fa5c988e68a45b200cbe559" Oct 2 19:18:55.185078 kubelet[2203]: E1002 19:18:55.184974 2203 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:18:55.185078 kubelet[2203]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:18:55.185078 kubelet[2203]: rm /hostbin/cilium-mount Oct 2 19:18:55.185515 kubelet[2203]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8t644,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-pn6mz_kube-system(e8056063-246d-470a-9e16-717ad060bbe8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:18:55.185679 kubelet[2203]: E1002 19:18:55.185035 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pn6mz" podUID=e8056063-246d-470a-9e16-717ad060bbe8 Oct 2 19:18:55.274626 kubelet[2203]: E1002 19:18:55.274581 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:55.330623 kubelet[2203]: E1002 19:18:55.330570 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:55.809637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1111333947.mount: Deactivated successfully. Oct 2 19:18:56.030362 env[1743]: time="2023-10-02T19:18:56.030299916Z" level=info msg="CreateContainer within sandbox \"d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:18:56.060590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913997852.mount: Deactivated successfully. 
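For readability, the mount-cgroup init container that the kubelet dumps in Go struct notation above (and again after the second failed attempt below) can be sketched with the k8s.io/api/core/v1 types. Field values are copied from the log; the enclosing Pod, its volumes, and the service-account mount are omitted, so this is an illustrative reconstruction, not the Cilium manifest itself.

```go
// Sketch of the failing mount-cgroup init container, reconstructed from the
// kubelet dump in the log. Requires k8s.io/api in go.mod; illustrative only.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	script := `cp /usr/bin/cilium-mount /hostbin/cilium-mount;
nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
rm /hostbin/cilium-mount`

	mountCgroup := corev1.Container{
		Name:    "mount-cgroup",
		Image:   "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5",
		Command: []string{"sh", "-ec", script},
		Env: []corev1.EnvVar{
			{Name: "CGROUP_ROOT", Value: "/run/cilium/cgroupv2"},
			{Name: "BIN_PATH", Value: "/opt/cni/bin"},
		},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "hostproc", MountPath: "/hostproc"},
			{Name: "cni-path", MountPath: "/hostbin"},
		},
		SecurityContext: &corev1.SecurityContext{
			Capabilities: &corev1.Capabilities{
				Add:  []corev1.Capability{"SYS_ADMIN", "SYS_CHROOT", "SYS_PTRACE"},
				Drop: []corev1.Capability{"ALL"},
			},
			// The requested SELinux label; applying it appears to be the step
			// that fails above with "write /proc/self/attr/keycreate: invalid argument".
			SELinuxOptions: &corev1.SELinuxOptions{Type: "spc_t", Level: "s0"},
		},
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
		ImagePullPolicy:          corev1.PullIfNotPresent,
	}
	fmt.Printf("%+v\n", mountCgroup)
}
```

Both attempts in this log (container IDs 4ab6f186… and fefe6358…) fail at the same point, before the container process ever starts.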
Oct 2 19:18:56.073296 env[1743]: time="2023-10-02T19:18:56.073232042Z" level=info msg="CreateContainer within sandbox \"d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"fefe6358d694539f967ee35841805e1fd92678487dec3761ba1c78d7c282b4f0\"" Oct 2 19:18:56.074528 env[1743]: time="2023-10-02T19:18:56.074477381Z" level=info msg="StartContainer for \"fefe6358d694539f967ee35841805e1fd92678487dec3761ba1c78d7c282b4f0\"" Oct 2 19:18:56.126739 systemd[1]: Started cri-containerd-fefe6358d694539f967ee35841805e1fd92678487dec3761ba1c78d7c282b4f0.scope. Oct 2 19:18:56.168762 systemd[1]: cri-containerd-fefe6358d694539f967ee35841805e1fd92678487dec3761ba1c78d7c282b4f0.scope: Deactivated successfully. Oct 2 19:18:56.211609 env[1743]: time="2023-10-02T19:18:56.211529341Z" level=info msg="shim disconnected" id=fefe6358d694539f967ee35841805e1fd92678487dec3761ba1c78d7c282b4f0 Oct 2 19:18:56.211609 env[1743]: time="2023-10-02T19:18:56.211605097Z" level=warning msg="cleaning up after shim disconnected" id=fefe6358d694539f967ee35841805e1fd92678487dec3761ba1c78d7c282b4f0 namespace=k8s.io Oct 2 19:18:56.211983 env[1743]: time="2023-10-02T19:18:56.211628137Z" level=info msg="cleaning up dead shim" Oct 2 19:18:56.239599 env[1743]: time="2023-10-02T19:18:56.239523483Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3128 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:18:56Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fefe6358d694539f967ee35841805e1fd92678487dec3761ba1c78d7c282b4f0/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:18:56.240095 env[1743]: time="2023-10-02T19:18:56.240008412Z" level=error msg="copy shim log" error="read /proc/self/fd/48: file already closed" Oct 2 19:18:56.242934 env[1743]: time="2023-10-02T19:18:56.242841616Z" level=error msg="Failed to pipe stderr of container \"fefe6358d694539f967ee35841805e1fd92678487dec3761ba1c78d7c282b4f0\"" error="reading from a closed fifo" Oct 2 19:18:56.243060 env[1743]: time="2023-10-02T19:18:56.243018002Z" level=error msg="Failed to pipe stdout of container \"fefe6358d694539f967ee35841805e1fd92678487dec3761ba1c78d7c282b4f0\"" error="reading from a closed fifo" Oct 2 19:18:56.251105 env[1743]: time="2023-10-02T19:18:56.251033368Z" level=error msg="StartContainer for \"fefe6358d694539f967ee35841805e1fd92678487dec3761ba1c78d7c282b4f0\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:18:56.251920 kubelet[2203]: E1002 19:18:56.251531 2203 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="fefe6358d694539f967ee35841805e1fd92678487dec3761ba1c78d7c282b4f0" Oct 2 19:18:56.251920 kubelet[2203]: E1002 19:18:56.251697 2203 kuberuntime_manager.go:872] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:18:56.251920 kubelet[2203]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:18:56.251920 kubelet[2203]: rm /hostbin/cilium-mount Oct 2 19:18:56.252296 kubelet[2203]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8t644,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-pn6mz_kube-system(e8056063-246d-470a-9e16-717ad060bbe8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:18:56.252415 kubelet[2203]: E1002 19:18:56.251754 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pn6mz" podUID=e8056063-246d-470a-9e16-717ad060bbe8 Oct 2 19:18:56.275214 kubelet[2203]: E1002 19:18:56.275159 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:56.946768 env[1743]: time="2023-10-02T19:18:56.946685681Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:56.950888 env[1743]: time="2023-10-02T19:18:56.950798231Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:56.953780 env[1743]: time="2023-10-02T19:18:56.953720702Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:56.955105 env[1743]: time="2023-10-02T19:18:56.955037561Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 2 19:18:56.958792 env[1743]: time="2023-10-02T19:18:56.958739822Z" level=info msg="CreateContainer within sandbox \"e54f06d4dc8a3cd5ae9020c5cc4f74f70483fbbd7bb4dcc905cf732b95ec1375\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:18:56.985709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4219639447.mount: Deactivated successfully. Oct 2 19:18:56.995086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3695380586.mount: Deactivated successfully. Oct 2 19:18:56.999103 env[1743]: time="2023-10-02T19:18:56.999027191Z" level=info msg="CreateContainer within sandbox \"e54f06d4dc8a3cd5ae9020c5cc4f74f70483fbbd7bb4dcc905cf732b95ec1375\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8640ccbb0a693b73e733396f806c1b44aa832d3d449b9a572382af626df7ff5f\"" Oct 2 19:18:57.000150 env[1743]: time="2023-10-02T19:18:57.000063000Z" level=info msg="StartContainer for \"8640ccbb0a693b73e733396f806c1b44aa832d3d449b9a572382af626df7ff5f\"" Oct 2 19:18:57.030205 kubelet[2203]: I1002 19:18:57.029404 2203 scope.go:115] "RemoveContainer" containerID="4ab6f1865e8f19e331616f8233c25ebeef897aab9fa5c988e68a45b200cbe559" Oct 2 19:18:57.030205 kubelet[2203]: I1002 19:18:57.029973 2203 scope.go:115] "RemoveContainer" containerID="4ab6f1865e8f19e331616f8233c25ebeef897aab9fa5c988e68a45b200cbe559" Oct 2 19:18:57.032321 env[1743]: time="2023-10-02T19:18:57.032261436Z" level=info msg="RemoveContainer for \"4ab6f1865e8f19e331616f8233c25ebeef897aab9fa5c988e68a45b200cbe559\"" Oct 2 19:18:57.033124 env[1743]: time="2023-10-02T19:18:57.033066474Z" level=info msg="RemoveContainer for \"4ab6f1865e8f19e331616f8233c25ebeef897aab9fa5c988e68a45b200cbe559\"" Oct 2 19:18:57.033312 env[1743]: time="2023-10-02T19:18:57.033251645Z" level=error msg="RemoveContainer for \"4ab6f1865e8f19e331616f8233c25ebeef897aab9fa5c988e68a45b200cbe559\" failed" error="failed to set removing state for container \"4ab6f1865e8f19e331616f8233c25ebeef897aab9fa5c988e68a45b200cbe559\": container is already in removing state" Oct 2 19:18:57.033580 kubelet[2203]: E1002 19:18:57.033553 2203 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"4ab6f1865e8f19e331616f8233c25ebeef897aab9fa5c988e68a45b200cbe559\": container is already in removing state" containerID="4ab6f1865e8f19e331616f8233c25ebeef897aab9fa5c988e68a45b200cbe559" Oct 2 19:18:57.033766 kubelet[2203]: E1002 19:18:57.033742 2203 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "4ab6f1865e8f19e331616f8233c25ebeef897aab9fa5c988e68a45b200cbe559": container is already in removing state; Skipping pod "cilium-pn6mz_kube-system(e8056063-246d-470a-9e16-717ad060bbe8)" Oct 2 19:18:57.034373 kubelet[2203]: E1002 19:18:57.034343 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-pn6mz_kube-system(e8056063-246d-470a-9e16-717ad060bbe8)\"" pod="kube-system/cilium-pn6mz" podUID=e8056063-246d-470a-9e16-717ad060bbe8 Oct 2 19:18:57.038713 env[1743]: time="2023-10-02T19:18:57.038650826Z" level=info msg="RemoveContainer for \"4ab6f1865e8f19e331616f8233c25ebeef897aab9fa5c988e68a45b200cbe559\" returns successfully" Oct 2 19:18:57.051796 systemd[1]: Started cri-containerd-8640ccbb0a693b73e733396f806c1b44aa832d3d449b9a572382af626df7ff5f.scope. Oct 2 19:18:57.090000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.090000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.091000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.091000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.091000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.091000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.091000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.091000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.091000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.091000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.091000 audit: BPF prog-id=92 op=LOAD Oct 2 19:18:57.095000 audit[3150]: AVC avc: denied { bpf } for pid=3150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.095000 audit[3150]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=2992 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:57.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836343063636262306136393362373365373333333936663830366331 Oct 2 
19:18:57.095000 audit[3150]: AVC avc: denied { perfmon } for pid=3150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.095000 audit[3150]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2992 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:57.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836343063636262306136393362373365373333333936663830366331 Oct 2 19:18:57.095000 audit[3150]: AVC avc: denied { bpf } for pid=3150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.095000 audit[3150]: AVC avc: denied { bpf } for pid=3150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.095000 audit[3150]: AVC avc: denied { bpf } for pid=3150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.095000 audit[3150]: AVC avc: denied { perfmon } for pid=3150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.095000 audit[3150]: AVC avc: denied { perfmon } for pid=3150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.095000 audit[3150]: AVC avc: denied { perfmon } for pid=3150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.095000 audit[3150]: AVC avc: denied { perfmon } for pid=3150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.095000 audit[3150]: AVC avc: denied { perfmon } for pid=3150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.095000 audit[3150]: AVC avc: denied { bpf } for pid=3150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.095000 audit[3150]: AVC avc: denied { bpf } for pid=3150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.095000 audit: BPF prog-id=93 op=LOAD Oct 2 19:18:57.095000 audit[3150]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2992 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:57.095000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836343063636262306136393362373365373333333936663830366331 Oct 2 19:18:57.096000 audit[3150]: AVC avc: denied { bpf } for pid=3150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.096000 audit[3150]: AVC avc: denied { bpf } for pid=3150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.096000 audit[3150]: AVC avc: denied { perfmon } for pid=3150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.096000 audit[3150]: AVC avc: denied { perfmon } for pid=3150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.096000 audit[3150]: AVC avc: denied { perfmon } for pid=3150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.096000 audit[3150]: AVC avc: denied { perfmon } for pid=3150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.096000 audit[3150]: AVC avc: denied { perfmon } for pid=3150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.096000 audit[3150]: AVC avc: denied { bpf } for pid=3150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.096000 audit[3150]: AVC avc: denied { bpf } for pid=3150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.096000 audit: BPF prog-id=94 op=LOAD Oct 2 19:18:57.096000 audit[3150]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=2992 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:57.096000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836343063636262306136393362373365373333333936663830366331 Oct 2 19:18:57.096000 audit: BPF prog-id=94 op=UNLOAD Oct 2 19:18:57.096000 audit: BPF prog-id=93 op=UNLOAD Oct 2 19:18:57.096000 audit[3150]: AVC avc: denied { bpf } for pid=3150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.096000 audit[3150]: AVC avc: denied { bpf } for pid=3150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.096000 audit[3150]: AVC avc: denied { bpf } for pid=3150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.096000 audit[3150]: AVC avc: denied { perfmon } for pid=3150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.096000 audit[3150]: AVC avc: denied { perfmon } for pid=3150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.096000 audit[3150]: AVC avc: denied { perfmon } for pid=3150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.096000 audit[3150]: AVC avc: denied { perfmon } for pid=3150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.096000 audit[3150]: AVC avc: denied { perfmon } for pid=3150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.096000 audit[3150]: AVC avc: denied { bpf } for pid=3150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.096000 audit[3150]: AVC avc: denied { bpf } for pid=3150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:57.096000 audit: BPF prog-id=95 op=LOAD Oct 2 19:18:57.096000 audit[3150]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2992 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:57.096000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836343063636262306136393362373365373333333936663830366331 Oct 2 19:18:57.133414 env[1743]: time="2023-10-02T19:18:57.133287580Z" level=info msg="StartContainer for \"8640ccbb0a693b73e733396f806c1b44aa832d3d449b9a572382af626df7ff5f\" returns successfully" Oct 2 19:18:57.204000 audit[3159]: AVC avc: denied { map_create } for pid=3159 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c223,c1023 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c223,c1023 tclass=bpf permissive=0 Oct 2 19:18:57.204000 audit[3159]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-13 a0=0 a1=400051f768 a2=48 a3=0 items=0 ppid=2992 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c223,c1023 key=(null) Oct 2 19:18:57.204000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:18:57.276130 kubelet[2203]: E1002 19:18:57.276063 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:58.040247 kubelet[2203]: E1002 19:18:58.040185 2203 pod_workers.go:965] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-pn6mz_kube-system(e8056063-246d-470a-9e16-717ad060bbe8)\"" pod="kube-system/cilium-pn6mz" podUID=e8056063-246d-470a-9e16-717ad060bbe8 Oct 2 19:18:58.258090 kubelet[2203]: W1002 19:18:58.258020 2203 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8056063_246d_470a_9e16_717ad060bbe8.slice/cri-containerd-4ab6f1865e8f19e331616f8233c25ebeef897aab9fa5c988e68a45b200cbe559.scope WatchSource:0}: container "4ab6f1865e8f19e331616f8233c25ebeef897aab9fa5c988e68a45b200cbe559" in namespace "k8s.io": not found Oct 2 19:18:58.276758 kubelet[2203]: E1002 19:18:58.276724 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:59.278128 kubelet[2203]: E1002 19:18:59.278075 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:00.081598 kubelet[2203]: E1002 19:19:00.081562 2203 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:00.279053 kubelet[2203]: E1002 19:19:00.279016 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:00.332006 kubelet[2203]: E1002 19:19:00.331856 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:01.280279 kubelet[2203]: E1002 19:19:01.280208 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:01.367518 kubelet[2203]: W1002 19:19:01.367452 2203 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8056063_246d_470a_9e16_717ad060bbe8.slice/cri-containerd-fefe6358d694539f967ee35841805e1fd92678487dec3761ba1c78d7c282b4f0.scope WatchSource:0}: task fefe6358d694539f967ee35841805e1fd92678487dec3761ba1c78d7c282b4f0 not found: not found Oct 2 19:19:02.280707 kubelet[2203]: E1002 19:19:02.280656 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:03.281858 kubelet[2203]: E1002 19:19:03.281806 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:04.282827 kubelet[2203]: E1002 19:19:04.282785 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:05.284459 kubelet[2203]: E1002 19:19:05.284416 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:05.332855 kubelet[2203]: E1002 19:19:05.332794 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:06.285691 kubelet[2203]: E1002 19:19:06.285648 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:07.286664 kubelet[2203]: E1002 19:19:07.286607 2203 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:08.287168 kubelet[2203]: E1002 19:19:08.287107 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:09.288128 kubelet[2203]: E1002 19:19:09.288086 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:10.289087 kubelet[2203]: E1002 19:19:10.289026 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:10.333831 kubelet[2203]: E1002 19:19:10.333799 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:11.290094 kubelet[2203]: E1002 19:19:11.290030 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:12.290822 kubelet[2203]: E1002 19:19:12.290751 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:12.548320 env[1743]: time="2023-10-02T19:19:12.547988678Z" level=info msg="CreateContainer within sandbox \"d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:19:12.566025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1807770731.mount: Deactivated successfully. Oct 2 19:19:12.578229 env[1743]: time="2023-10-02T19:19:12.578145293Z" level=info msg="CreateContainer within sandbox \"d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9\"" Oct 2 19:19:12.579732 env[1743]: time="2023-10-02T19:19:12.579657117Z" level=info msg="StartContainer for \"7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9\"" Oct 2 19:19:12.636251 systemd[1]: Started cri-containerd-7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9.scope. Oct 2 19:19:12.672659 systemd[1]: cri-containerd-7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9.scope: Deactivated successfully. 
Oct 2 19:19:12.831544 env[1743]: time="2023-10-02T19:19:12.831386190Z" level=info msg="shim disconnected" id=7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9 Oct 2 19:19:12.832137 env[1743]: time="2023-10-02T19:19:12.832094510Z" level=warning msg="cleaning up after shim disconnected" id=7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9 namespace=k8s.io Oct 2 19:19:12.832296 env[1743]: time="2023-10-02T19:19:12.832267129Z" level=info msg="cleaning up dead shim" Oct 2 19:19:12.858402 env[1743]: time="2023-10-02T19:19:12.858327352Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:19:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3207 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:19:12Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:19:12.859127 env[1743]: time="2023-10-02T19:19:12.859047864Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:19:12.860003 env[1743]: time="2023-10-02T19:19:12.859488633Z" level=error msg="Failed to pipe stdout of container \"7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9\"" error="reading from a closed fifo" Oct 2 19:19:12.860286 env[1743]: time="2023-10-02T19:19:12.860239841Z" level=error msg="Failed to pipe stderr of container \"7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9\"" error="reading from a closed fifo" Oct 2 19:19:12.862500 env[1743]: time="2023-10-02T19:19:12.862437329Z" level=error msg="StartContainer for \"7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:19:12.863977 kubelet[2203]: E1002 19:19:12.863069 2203 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9" Oct 2 19:19:12.863977 kubelet[2203]: E1002 19:19:12.863253 2203 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:19:12.863977 kubelet[2203]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:19:12.863977 kubelet[2203]: rm /hostbin/cilium-mount Oct 2 19:19:12.864378 kubelet[2203]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8t644,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-pn6mz_kube-system(e8056063-246d-470a-9e16-717ad060bbe8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:19:12.864499 kubelet[2203]: E1002 19:19:12.863344 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pn6mz" podUID=e8056063-246d-470a-9e16-717ad060bbe8 Oct 2 19:19:13.081737 kubelet[2203]: I1002 19:19:13.081612 2203 scope.go:115] "RemoveContainer" containerID="fefe6358d694539f967ee35841805e1fd92678487dec3761ba1c78d7c282b4f0" Oct 2 19:19:13.082942 kubelet[2203]: I1002 19:19:13.082859 2203 scope.go:115] "RemoveContainer" containerID="fefe6358d694539f967ee35841805e1fd92678487dec3761ba1c78d7c282b4f0" Oct 2 19:19:13.087413 env[1743]: time="2023-10-02T19:19:13.087346160Z" level=info msg="RemoveContainer for \"fefe6358d694539f967ee35841805e1fd92678487dec3761ba1c78d7c282b4f0\"" Oct 2 19:19:13.088088 env[1743]: time="2023-10-02T19:19:13.087996676Z" level=info msg="RemoveContainer for \"fefe6358d694539f967ee35841805e1fd92678487dec3761ba1c78d7c282b4f0\"" Oct 2 19:19:13.088240 env[1743]: time="2023-10-02T19:19:13.088180599Z" level=error msg="RemoveContainer for \"fefe6358d694539f967ee35841805e1fd92678487dec3761ba1c78d7c282b4f0\" failed" error="failed to set removing state for container \"fefe6358d694539f967ee35841805e1fd92678487dec3761ba1c78d7c282b4f0\": container is already in removing state" Oct 2 19:19:13.088679 kubelet[2203]: E1002 19:19:13.088629 2203 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"fefe6358d694539f967ee35841805e1fd92678487dec3761ba1c78d7c282b4f0\": container is already in removing state" 
containerID="fefe6358d694539f967ee35841805e1fd92678487dec3761ba1c78d7c282b4f0" Oct 2 19:19:13.088794 kubelet[2203]: E1002 19:19:13.088695 2203 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "fefe6358d694539f967ee35841805e1fd92678487dec3761ba1c78d7c282b4f0": container is already in removing state; Skipping pod "cilium-pn6mz_kube-system(e8056063-246d-470a-9e16-717ad060bbe8)" Oct 2 19:19:13.089575 kubelet[2203]: E1002 19:19:13.089141 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-pn6mz_kube-system(e8056063-246d-470a-9e16-717ad060bbe8)\"" pod="kube-system/cilium-pn6mz" podUID=e8056063-246d-470a-9e16-717ad060bbe8 Oct 2 19:19:13.093123 env[1743]: time="2023-10-02T19:19:13.093061189Z" level=info msg="RemoveContainer for \"fefe6358d694539f967ee35841805e1fd92678487dec3761ba1c78d7c282b4f0\" returns successfully" Oct 2 19:19:13.105279 kubelet[2203]: I1002 19:19:13.105210 2203 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-wq87t" podStartSLOduration=-9.22337201574964e+09 pod.CreationTimestamp="2023-10-02 19:18:52 +0000 UTC" firstStartedPulling="2023-10-02 19:18:54.507013246 +0000 UTC m=+215.575716842" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 19:18:58.074267736 +0000 UTC m=+219.142971332" watchObservedRunningTime="2023-10-02 19:19:13.105137157 +0000 UTC m=+234.173840753" Oct 2 19:19:13.291693 kubelet[2203]: E1002 19:19:13.291648 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:13.561969 systemd[1]: run-containerd-runc-k8s.io-7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9-runc.B2iM9J.mount: Deactivated successfully. Oct 2 19:19:13.562135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9-rootfs.mount: Deactivated successfully. 
Oct 2 19:19:14.293417 kubelet[2203]: E1002 19:19:14.293349 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:15.294456 kubelet[2203]: E1002 19:19:15.294383 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:15.336316 kubelet[2203]: E1002 19:19:15.336265 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:15.938420 kubelet[2203]: W1002 19:19:15.938350 2203 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8056063_246d_470a_9e16_717ad060bbe8.slice/cri-containerd-7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9.scope WatchSource:0}: task 7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9 not found: not found Oct 2 19:19:16.294796 kubelet[2203]: E1002 19:19:16.294500 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:17.295189 kubelet[2203]: E1002 19:19:17.295145 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:18.296740 kubelet[2203]: E1002 19:19:18.296696 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:19.297682 kubelet[2203]: E1002 19:19:19.297638 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:20.082303 kubelet[2203]: E1002 19:19:20.082259 2203 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:20.111850 env[1743]: time="2023-10-02T19:19:20.111519512Z" level=info msg="StopPodSandbox for \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\"" Oct 2 19:19:20.113224 env[1743]: time="2023-10-02T19:19:20.113092476Z" level=info msg="TearDown network for sandbox \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\" successfully" Oct 2 19:19:20.113922 env[1743]: time="2023-10-02T19:19:20.113597098Z" level=info msg="StopPodSandbox for \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\" returns successfully" Oct 2 19:19:20.114789 env[1743]: time="2023-10-02T19:19:20.114727768Z" level=info msg="RemovePodSandbox for \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\"" Oct 2 19:19:20.114989 env[1743]: time="2023-10-02T19:19:20.114789412Z" level=info msg="Forcibly stopping sandbox \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\"" Oct 2 19:19:20.114989 env[1743]: time="2023-10-02T19:19:20.114961899Z" level=info msg="TearDown network for sandbox \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\" successfully" Oct 2 19:19:20.122264 env[1743]: time="2023-10-02T19:19:20.122178377Z" level=info msg="RemovePodSandbox \"1168c616dde413ff2036951deaefd20d86b8ffd946af8eabe4da743790938c1c\" returns successfully" Oct 2 19:19:20.299064 kubelet[2203]: E1002 19:19:20.299028 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:20.337951 kubelet[2203]: E1002 19:19:20.337309 2203 kubelet.go:2475] "Container runtime network not 
ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:21.300049 kubelet[2203]: E1002 19:19:21.299986 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:22.300219 kubelet[2203]: E1002 19:19:22.300152 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:23.301070 kubelet[2203]: E1002 19:19:23.301029 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:23.544217 kubelet[2203]: E1002 19:19:23.544170 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-pn6mz_kube-system(e8056063-246d-470a-9e16-717ad060bbe8)\"" pod="kube-system/cilium-pn6mz" podUID=e8056063-246d-470a-9e16-717ad060bbe8 Oct 2 19:19:24.302788 kubelet[2203]: E1002 19:19:24.302734 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:25.304345 kubelet[2203]: E1002 19:19:25.304289 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:25.338177 kubelet[2203]: E1002 19:19:25.338125 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:26.305371 kubelet[2203]: E1002 19:19:26.305308 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:27.306137 kubelet[2203]: E1002 19:19:27.306066 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:28.307024 kubelet[2203]: E1002 19:19:28.306950 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:29.307591 kubelet[2203]: E1002 19:19:29.307548 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:30.309068 kubelet[2203]: E1002 19:19:30.308956 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:30.339098 kubelet[2203]: E1002 19:19:30.339050 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:31.310323 kubelet[2203]: E1002 19:19:31.310283 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:32.311958 kubelet[2203]: E1002 19:19:32.311868 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:33.312684 kubelet[2203]: E1002 19:19:33.312641 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:34.313663 kubelet[2203]: E1002 19:19:34.313622 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:19:35.314628 kubelet[2203]: E1002 19:19:35.314559 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:35.340584 kubelet[2203]: E1002 19:19:35.340535 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:35.547221 env[1743]: time="2023-10-02T19:19:35.547167697Z" level=info msg="CreateContainer within sandbox \"d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:19:35.565800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1971980889.mount: Deactivated successfully. Oct 2 19:19:35.576173 env[1743]: time="2023-10-02T19:19:35.576105750Z" level=info msg="CreateContainer within sandbox \"d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"1044acd15c9fb42dc9e5a3a11c949c3949c9e72f947afdf2cc74bbbbf51d1dbd\"" Oct 2 19:19:35.577200 env[1743]: time="2023-10-02T19:19:35.577031414Z" level=info msg="StartContainer for \"1044acd15c9fb42dc9e5a3a11c949c3949c9e72f947afdf2cc74bbbbf51d1dbd\"" Oct 2 19:19:35.625315 systemd[1]: Started cri-containerd-1044acd15c9fb42dc9e5a3a11c949c3949c9e72f947afdf2cc74bbbbf51d1dbd.scope. Oct 2 19:19:35.663926 systemd[1]: cri-containerd-1044acd15c9fb42dc9e5a3a11c949c3949c9e72f947afdf2cc74bbbbf51d1dbd.scope: Deactivated successfully. Oct 2 19:19:35.691615 env[1743]: time="2023-10-02T19:19:35.691548637Z" level=info msg="shim disconnected" id=1044acd15c9fb42dc9e5a3a11c949c3949c9e72f947afdf2cc74bbbbf51d1dbd Oct 2 19:19:35.692150 env[1743]: time="2023-10-02T19:19:35.692105123Z" level=warning msg="cleaning up after shim disconnected" id=1044acd15c9fb42dc9e5a3a11c949c3949c9e72f947afdf2cc74bbbbf51d1dbd namespace=k8s.io Oct 2 19:19:35.692296 env[1743]: time="2023-10-02T19:19:35.692268838Z" level=info msg="cleaning up dead shim" Oct 2 19:19:35.718100 env[1743]: time="2023-10-02T19:19:35.718035566Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:19:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3248 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:19:35Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1044acd15c9fb42dc9e5a3a11c949c3949c9e72f947afdf2cc74bbbbf51d1dbd/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:19:35.718804 env[1743]: time="2023-10-02T19:19:35.718727052Z" level=error msg="copy shim log" error="read /proc/self/fd/49: file already closed" Oct 2 19:19:35.722041 env[1743]: time="2023-10-02T19:19:35.721968252Z" level=error msg="Failed to pipe stdout of container \"1044acd15c9fb42dc9e5a3a11c949c3949c9e72f947afdf2cc74bbbbf51d1dbd\"" error="reading from a closed fifo" Oct 2 19:19:35.722262 env[1743]: time="2023-10-02T19:19:35.722203271Z" level=error msg="Failed to pipe stderr of container \"1044acd15c9fb42dc9e5a3a11c949c3949c9e72f947afdf2cc74bbbbf51d1dbd\"" error="reading from a closed fifo" Oct 2 19:19:35.724634 env[1743]: time="2023-10-02T19:19:35.724571391Z" level=error msg="StartContainer for \"1044acd15c9fb42dc9e5a3a11c949c3949c9e72f947afdf2cc74bbbbf51d1dbd\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during 
container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:19:35.725869 kubelet[2203]: E1002 19:19:35.725129 2203 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1044acd15c9fb42dc9e5a3a11c949c3949c9e72f947afdf2cc74bbbbf51d1dbd" Oct 2 19:19:35.725869 kubelet[2203]: E1002 19:19:35.725263 2203 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:19:35.725869 kubelet[2203]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:19:35.725869 kubelet[2203]: rm /hostbin/cilium-mount Oct 2 19:19:35.726245 kubelet[2203]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8t644,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-pn6mz_kube-system(e8056063-246d-470a-9e16-717ad060bbe8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:19:35.726364 kubelet[2203]: E1002 19:19:35.725319 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pn6mz" podUID=e8056063-246d-470a-9e16-717ad060bbe8 Oct 2 19:19:36.133481 kubelet[2203]: I1002 19:19:36.133437 2203 scope.go:115] "RemoveContainer" containerID="7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9" Oct 2 19:19:36.134550 kubelet[2203]: I1002 19:19:36.134503 2203 scope.go:115] "RemoveContainer" 
containerID="7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9" Oct 2 19:19:36.137395 env[1743]: time="2023-10-02T19:19:36.137303240Z" level=info msg="RemoveContainer for \"7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9\"" Oct 2 19:19:36.139925 env[1743]: time="2023-10-02T19:19:36.139834859Z" level=info msg="RemoveContainer for \"7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9\"" Oct 2 19:19:36.140343 env[1743]: time="2023-10-02T19:19:36.140273613Z" level=error msg="RemoveContainer for \"7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9\" failed" error="failed to set removing state for container \"7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9\": container is already in removing state" Oct 2 19:19:36.140789 kubelet[2203]: E1002 19:19:36.140752 2203 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9\": container is already in removing state" containerID="7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9" Oct 2 19:19:36.140969 kubelet[2203]: E1002 19:19:36.140816 2203 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9": container is already in removing state; Skipping pod "cilium-pn6mz_kube-system(e8056063-246d-470a-9e16-717ad060bbe8)" Oct 2 19:19:36.141314 kubelet[2203]: E1002 19:19:36.141275 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-pn6mz_kube-system(e8056063-246d-470a-9e16-717ad060bbe8)\"" pod="kube-system/cilium-pn6mz" podUID=e8056063-246d-470a-9e16-717ad060bbe8 Oct 2 19:19:36.143602 env[1743]: time="2023-10-02T19:19:36.143516710Z" level=info msg="RemoveContainer for \"7e6fc6398100af220cf67b046975ace4fd2fd0b6e62c459a451483acfccf46a9\" returns successfully" Oct 2 19:19:36.315638 kubelet[2203]: E1002 19:19:36.315595 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:36.560910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1044acd15c9fb42dc9e5a3a11c949c3949c9e72f947afdf2cc74bbbbf51d1dbd-rootfs.mount: Deactivated successfully. 
Oct 2 19:19:37.316975 kubelet[2203]: E1002 19:19:37.316927 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:38.318306 kubelet[2203]: E1002 19:19:38.318252 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:38.797458 kubelet[2203]: W1002 19:19:38.797412 2203 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8056063_246d_470a_9e16_717ad060bbe8.slice/cri-containerd-1044acd15c9fb42dc9e5a3a11c949c3949c9e72f947afdf2cc74bbbbf51d1dbd.scope WatchSource:0}: task 1044acd15c9fb42dc9e5a3a11c949c3949c9e72f947afdf2cc74bbbbf51d1dbd not found: not found Oct 2 19:19:39.319628 kubelet[2203]: E1002 19:19:39.319582 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:40.082104 kubelet[2203]: E1002 19:19:40.082055 2203 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:40.320708 kubelet[2203]: E1002 19:19:40.320644 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:40.342000 kubelet[2203]: E1002 19:19:40.341518 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:41.321200 kubelet[2203]: E1002 19:19:41.321135 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:42.322119 kubelet[2203]: E1002 19:19:42.322073 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:43.323042 kubelet[2203]: E1002 19:19:43.323000 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:44.324374 kubelet[2203]: E1002 19:19:44.324320 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:45.326148 kubelet[2203]: E1002 19:19:45.326079 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:45.342948 kubelet[2203]: E1002 19:19:45.342909 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:46.326972 kubelet[2203]: E1002 19:19:46.326931 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:46.544072 kubelet[2203]: E1002 19:19:46.544033 2203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-pn6mz_kube-system(e8056063-246d-470a-9e16-717ad060bbe8)\"" pod="kube-system/cilium-pn6mz" podUID=e8056063-246d-470a-9e16-717ad060bbe8 Oct 2 19:19:47.328384 kubelet[2203]: E1002 19:19:47.328338 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:48.329281 kubelet[2203]: E1002 19:19:48.329210 2203 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:49.330285 kubelet[2203]: E1002 19:19:49.330244 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:50.331990 kubelet[2203]: E1002 19:19:50.331861 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:50.343689 kubelet[2203]: E1002 19:19:50.343647 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:51.332373 kubelet[2203]: E1002 19:19:51.332332 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:52.333593 kubelet[2203]: E1002 19:19:52.333519 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:53.334726 kubelet[2203]: E1002 19:19:53.334653 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:54.335384 kubelet[2203]: E1002 19:19:54.335321 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:54.689088 env[1743]: time="2023-10-02T19:19:54.689023633Z" level=info msg="StopPodSandbox for \"d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a\"" Oct 2 19:19:54.689728 env[1743]: time="2023-10-02T19:19:54.689120665Z" level=info msg="Container to stop \"1044acd15c9fb42dc9e5a3a11c949c3949c9e72f947afdf2cc74bbbbf51d1dbd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:19:54.691568 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a-shm.mount: Deactivated successfully. Oct 2 19:19:54.712329 systemd[1]: cri-containerd-d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a.scope: Deactivated successfully. Oct 2 19:19:54.710000 audit: BPF prog-id=88 op=UNLOAD Oct 2 19:19:54.715674 kernel: kauditd_printk_skb: 164 callbacks suppressed Oct 2 19:19:54.715757 kernel: audit: type=1334 audit(1696274394.710:784): prog-id=88 op=UNLOAD Oct 2 19:19:54.718000 audit: BPF prog-id=91 op=UNLOAD Oct 2 19:19:54.723975 kernel: audit: type=1334 audit(1696274394.718:785): prog-id=91 op=UNLOAD Oct 2 19:19:54.748522 env[1743]: time="2023-10-02T19:19:54.748452899Z" level=info msg="StopContainer for \"8640ccbb0a693b73e733396f806c1b44aa832d3d449b9a572382af626df7ff5f\" with timeout 30 (s)" Oct 2 19:19:54.749507 env[1743]: time="2023-10-02T19:19:54.749456336Z" level=info msg="Stop container \"8640ccbb0a693b73e733396f806c1b44aa832d3d449b9a572382af626df7ff5f\" with signal terminated" Oct 2 19:19:54.779000 audit: BPF prog-id=92 op=UNLOAD Oct 2 19:19:54.785068 kernel: audit: type=1334 audit(1696274394.779:786): prog-id=92 op=UNLOAD Oct 2 19:19:54.776531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a-rootfs.mount: Deactivated successfully. Oct 2 19:19:54.781123 systemd[1]: cri-containerd-8640ccbb0a693b73e733396f806c1b44aa832d3d449b9a572382af626df7ff5f.scope: Deactivated successfully. 
Oct 2 19:19:54.785000 audit: BPF prog-id=95 op=UNLOAD Oct 2 19:19:54.790922 kernel: audit: type=1334 audit(1696274394.785:787): prog-id=95 op=UNLOAD Oct 2 19:19:54.804477 env[1743]: time="2023-10-02T19:19:54.804397121Z" level=info msg="shim disconnected" id=d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a Oct 2 19:19:54.804796 env[1743]: time="2023-10-02T19:19:54.804764440Z" level=warning msg="cleaning up after shim disconnected" id=d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a namespace=k8s.io Oct 2 19:19:54.804966 env[1743]: time="2023-10-02T19:19:54.804936675Z" level=info msg="cleaning up dead shim" Oct 2 19:19:54.832900 env[1743]: time="2023-10-02T19:19:54.832820875Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:19:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3294 runtime=io.containerd.runc.v2\n" Oct 2 19:19:54.833671 env[1743]: time="2023-10-02T19:19:54.833622005Z" level=info msg="TearDown network for sandbox \"d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a\" successfully" Oct 2 19:19:54.833859 env[1743]: time="2023-10-02T19:19:54.833823304Z" level=info msg="StopPodSandbox for \"d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a\" returns successfully" Oct 2 19:19:54.848756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8640ccbb0a693b73e733396f806c1b44aa832d3d449b9a572382af626df7ff5f-rootfs.mount: Deactivated successfully. Oct 2 19:19:54.863173 env[1743]: time="2023-10-02T19:19:54.863111320Z" level=info msg="shim disconnected" id=8640ccbb0a693b73e733396f806c1b44aa832d3d449b9a572382af626df7ff5f Oct 2 19:19:54.863499 env[1743]: time="2023-10-02T19:19:54.863466231Z" level=warning msg="cleaning up after shim disconnected" id=8640ccbb0a693b73e733396f806c1b44aa832d3d449b9a572382af626df7ff5f namespace=k8s.io Oct 2 19:19:54.863651 env[1743]: time="2023-10-02T19:19:54.863612511Z" level=info msg="cleaning up dead shim" Oct 2 19:19:54.889591 env[1743]: time="2023-10-02T19:19:54.889533179Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:19:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3314 runtime=io.containerd.runc.v2\n" Oct 2 19:19:54.892734 env[1743]: time="2023-10-02T19:19:54.892679907Z" level=info msg="StopContainer for \"8640ccbb0a693b73e733396f806c1b44aa832d3d449b9a572382af626df7ff5f\" returns successfully" Oct 2 19:19:54.893722 env[1743]: time="2023-10-02T19:19:54.893670625Z" level=info msg="StopPodSandbox for \"e54f06d4dc8a3cd5ae9020c5cc4f74f70483fbbd7bb4dcc905cf732b95ec1375\"" Oct 2 19:19:54.894074 env[1743]: time="2023-10-02T19:19:54.893757792Z" level=info msg="Container to stop \"8640ccbb0a693b73e733396f806c1b44aa832d3d449b9a572382af626df7ff5f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:19:54.896390 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e54f06d4dc8a3cd5ae9020c5cc4f74f70483fbbd7bb4dcc905cf732b95ec1375-shm.mount: Deactivated successfully. Oct 2 19:19:54.913000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:19:54.915253 systemd[1]: cri-containerd-e54f06d4dc8a3cd5ae9020c5cc4f74f70483fbbd7bb4dcc905cf732b95ec1375.scope: Deactivated successfully. 
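
Annotation: the systemd mount units named above (run-containerd-io.containerd.grpc.v1.cri-sandboxes-...-shm.mount here, and the earlier var-lib-containerd-tmpmounts-containerd\x2dmount... units) encode filesystem paths with systemd's escaping: "/" becomes "-" and literal dashes become \xNN sequences. A rough inverse of that escaping (roughly what systemd-escape --unescape --path does), sufficient for the unit names appearing in this log:

    import re

    def unescape_mount_unit(unit: str) -> str:
        # Drop the ".mount" suffix, turn "-" back into "/", then decode \xNN escapes.
        name = unit.removesuffix(".mount")
        path = "/" + name.replace("-", "/")
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), path)

    print(unescape_mount_unit(
        r"var-lib-containerd-tmpmounts-containerd\x2dmount1807770731.mount"))
    # /var/lib/containerd/tmpmounts/containerd-mount1807770731
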
Oct 2 19:19:54.919991 kernel: audit: type=1334 audit(1696274394.913:788): prog-id=84 op=UNLOAD Oct 2 19:19:54.920000 audit: BPF prog-id=87 op=UNLOAD Oct 2 19:19:54.926008 kernel: audit: type=1334 audit(1696274394.920:789): prog-id=87 op=UNLOAD Oct 2 19:19:54.929933 kubelet[2203]: I1002 19:19:54.929331 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e8056063-246d-470a-9e16-717ad060bbe8-cilium-ipsec-secrets\") pod \"e8056063-246d-470a-9e16-717ad060bbe8\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " Oct 2 19:19:54.929933 kubelet[2203]: I1002 19:19:54.929421 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-cni-path\") pod \"e8056063-246d-470a-9e16-717ad060bbe8\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " Oct 2 19:19:54.929933 kubelet[2203]: I1002 19:19:54.929463 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-etc-cni-netd\") pod \"e8056063-246d-470a-9e16-717ad060bbe8\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " Oct 2 19:19:54.929933 kubelet[2203]: I1002 19:19:54.929501 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-cilium-run\") pod \"e8056063-246d-470a-9e16-717ad060bbe8\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " Oct 2 19:19:54.929933 kubelet[2203]: I1002 19:19:54.929543 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e8056063-246d-470a-9e16-717ad060bbe8-hubble-tls\") pod \"e8056063-246d-470a-9e16-717ad060bbe8\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " Oct 2 19:19:54.929933 kubelet[2203]: I1002 19:19:54.929586 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8056063-246d-470a-9e16-717ad060bbe8-cilium-config-path\") pod \"e8056063-246d-470a-9e16-717ad060bbe8\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " Oct 2 19:19:54.930448 kubelet[2203]: I1002 19:19:54.929627 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-host-proc-sys-net\") pod \"e8056063-246d-470a-9e16-717ad060bbe8\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " Oct 2 19:19:54.930448 kubelet[2203]: I1002 19:19:54.929666 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-hostproc\") pod \"e8056063-246d-470a-9e16-717ad060bbe8\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " Oct 2 19:19:54.930448 kubelet[2203]: I1002 19:19:54.929702 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-lib-modules\") pod \"e8056063-246d-470a-9e16-717ad060bbe8\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " Oct 2 19:19:54.930448 kubelet[2203]: I1002 19:19:54.929741 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-xtables-lock\") pod \"e8056063-246d-470a-9e16-717ad060bbe8\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " Oct 2 19:19:54.930448 kubelet[2203]: I1002 19:19:54.929777 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-bpf-maps\") pod \"e8056063-246d-470a-9e16-717ad060bbe8\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " Oct 2 19:19:54.930448 kubelet[2203]: I1002 19:19:54.929813 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-cilium-cgroup\") pod \"e8056063-246d-470a-9e16-717ad060bbe8\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " Oct 2 19:19:54.930919 kubelet[2203]: I1002 19:19:54.929859 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e8056063-246d-470a-9e16-717ad060bbe8-clustermesh-secrets\") pod \"e8056063-246d-470a-9e16-717ad060bbe8\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " Oct 2 19:19:54.930919 kubelet[2203]: I1002 19:19:54.930401 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e8056063-246d-470a-9e16-717ad060bbe8" (UID: "e8056063-246d-470a-9e16-717ad060bbe8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:54.930919 kubelet[2203]: I1002 19:19:54.930479 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-cni-path" (OuterVolumeSpecName: "cni-path") pod "e8056063-246d-470a-9e16-717ad060bbe8" (UID: "e8056063-246d-470a-9e16-717ad060bbe8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:54.930919 kubelet[2203]: I1002 19:19:54.930523 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e8056063-246d-470a-9e16-717ad060bbe8" (UID: "e8056063-246d-470a-9e16-717ad060bbe8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:54.930919 kubelet[2203]: I1002 19:19:54.930564 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e8056063-246d-470a-9e16-717ad060bbe8" (UID: "e8056063-246d-470a-9e16-717ad060bbe8"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:54.932420 kubelet[2203]: I1002 19:19:54.931310 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t644\" (UniqueName: \"kubernetes.io/projected/e8056063-246d-470a-9e16-717ad060bbe8-kube-api-access-8t644\") pod \"e8056063-246d-470a-9e16-717ad060bbe8\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " Oct 2 19:19:54.932420 kubelet[2203]: I1002 19:19:54.931369 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-host-proc-sys-kernel\") pod \"e8056063-246d-470a-9e16-717ad060bbe8\" (UID: \"e8056063-246d-470a-9e16-717ad060bbe8\") " Oct 2 19:19:54.932420 kubelet[2203]: I1002 19:19:54.931416 2203 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-etc-cni-netd\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:19:54.932420 kubelet[2203]: I1002 19:19:54.931440 2203 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-cilium-run\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:19:54.932420 kubelet[2203]: I1002 19:19:54.931464 2203 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-cni-path\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:19:54.932420 kubelet[2203]: I1002 19:19:54.931488 2203 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-host-proc-sys-net\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:19:54.932420 kubelet[2203]: I1002 19:19:54.931533 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e8056063-246d-470a-9e16-717ad060bbe8" (UID: "e8056063-246d-470a-9e16-717ad060bbe8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:54.932940 kubelet[2203]: I1002 19:19:54.931578 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-hostproc" (OuterVolumeSpecName: "hostproc") pod "e8056063-246d-470a-9e16-717ad060bbe8" (UID: "e8056063-246d-470a-9e16-717ad060bbe8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:54.932940 kubelet[2203]: I1002 19:19:54.931617 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e8056063-246d-470a-9e16-717ad060bbe8" (UID: "e8056063-246d-470a-9e16-717ad060bbe8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:54.932940 kubelet[2203]: I1002 19:19:54.931674 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e8056063-246d-470a-9e16-717ad060bbe8" (UID: "e8056063-246d-470a-9e16-717ad060bbe8"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:54.932940 kubelet[2203]: I1002 19:19:54.931714 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e8056063-246d-470a-9e16-717ad060bbe8" (UID: "e8056063-246d-470a-9e16-717ad060bbe8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:54.932940 kubelet[2203]: I1002 19:19:54.931750 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e8056063-246d-470a-9e16-717ad060bbe8" (UID: "e8056063-246d-470a-9e16-717ad060bbe8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:54.933259 kubelet[2203]: W1002 19:19:54.932816 2203 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/e8056063-246d-470a-9e16-717ad060bbe8/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:19:54.940211 kubelet[2203]: I1002 19:19:54.940028 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8056063-246d-470a-9e16-717ad060bbe8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e8056063-246d-470a-9e16-717ad060bbe8" (UID: "e8056063-246d-470a-9e16-717ad060bbe8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:19:54.957535 systemd[1]: var-lib-kubelet-pods-e8056063\x2d246d\x2d470a\x2d9e16\x2d717ad060bbe8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:19:54.971491 kubelet[2203]: I1002 19:19:54.971408 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8056063-246d-470a-9e16-717ad060bbe8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e8056063-246d-470a-9e16-717ad060bbe8" (UID: "e8056063-246d-470a-9e16-717ad060bbe8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:19:54.973619 kubelet[2203]: I1002 19:19:54.973568 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8056063-246d-470a-9e16-717ad060bbe8-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e8056063-246d-470a-9e16-717ad060bbe8" (UID: "e8056063-246d-470a-9e16-717ad060bbe8"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:19:54.974511 kubelet[2203]: I1002 19:19:54.974471 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8056063-246d-470a-9e16-717ad060bbe8-kube-api-access-8t644" (OuterVolumeSpecName: "kube-api-access-8t644") pod "e8056063-246d-470a-9e16-717ad060bbe8" (UID: "e8056063-246d-470a-9e16-717ad060bbe8"). InnerVolumeSpecName "kube-api-access-8t644". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:19:54.974856 kubelet[2203]: I1002 19:19:54.974769 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8056063-246d-470a-9e16-717ad060bbe8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e8056063-246d-470a-9e16-717ad060bbe8" (UID: "e8056063-246d-470a-9e16-717ad060bbe8"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:19:55.020592 env[1743]: time="2023-10-02T19:19:55.020531589Z" level=info msg="shim disconnected" id=e54f06d4dc8a3cd5ae9020c5cc4f74f70483fbbd7bb4dcc905cf732b95ec1375 Oct 2 19:19:55.021068 env[1743]: time="2023-10-02T19:19:55.021030451Z" level=warning msg="cleaning up after shim disconnected" id=e54f06d4dc8a3cd5ae9020c5cc4f74f70483fbbd7bb4dcc905cf732b95ec1375 namespace=k8s.io Oct 2 19:19:55.021216 env[1743]: time="2023-10-02T19:19:55.021185407Z" level=info msg="cleaning up dead shim" Oct 2 19:19:55.032529 kubelet[2203]: I1002 19:19:55.032468 2203 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e8056063-246d-470a-9e16-717ad060bbe8-hubble-tls\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:19:55.032529 kubelet[2203]: I1002 19:19:55.032525 2203 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-xtables-lock\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:19:55.032843 kubelet[2203]: I1002 19:19:55.032552 2203 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8056063-246d-470a-9e16-717ad060bbe8-cilium-config-path\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:19:55.032843 kubelet[2203]: I1002 19:19:55.032576 2203 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-hostproc\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:19:55.032843 kubelet[2203]: I1002 19:19:55.032602 2203 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-lib-modules\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:19:55.032843 kubelet[2203]: I1002 19:19:55.032625 2203 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-bpf-maps\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:19:55.032843 kubelet[2203]: I1002 19:19:55.032666 2203 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-cilium-cgroup\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:19:55.032843 kubelet[2203]: I1002 19:19:55.032693 2203 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e8056063-246d-470a-9e16-717ad060bbe8-clustermesh-secrets\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:19:55.032843 kubelet[2203]: I1002 19:19:55.032718 2203 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-8t644\" (UniqueName: \"kubernetes.io/projected/e8056063-246d-470a-9e16-717ad060bbe8-kube-api-access-8t644\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:19:55.032843 kubelet[2203]: I1002 19:19:55.032741 2203 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e8056063-246d-470a-9e16-717ad060bbe8-host-proc-sys-kernel\") on node \"172.31.22.12\" DevicePath \"\"" Oct 2 19:19:55.033365 kubelet[2203]: I1002 19:19:55.032764 2203 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e8056063-246d-470a-9e16-717ad060bbe8-cilium-ipsec-secrets\") on node \"172.31.22.12\" DevicePath 
\"\"" Oct 2 19:19:55.048555 env[1743]: time="2023-10-02T19:19:55.048501265Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:19:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3352 runtime=io.containerd.runc.v2\n" Oct 2 19:19:55.049305 env[1743]: time="2023-10-02T19:19:55.049259639Z" level=info msg="TearDown network for sandbox \"e54f06d4dc8a3cd5ae9020c5cc4f74f70483fbbd7bb4dcc905cf732b95ec1375\" successfully" Oct 2 19:19:55.049475 env[1743]: time="2023-10-02T19:19:55.049440575Z" level=info msg="StopPodSandbox for \"e54f06d4dc8a3cd5ae9020c5cc4f74f70483fbbd7bb4dcc905cf732b95ec1375\" returns successfully" Oct 2 19:19:55.133129 kubelet[2203]: I1002 19:19:55.133065 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksmvr\" (UniqueName: \"kubernetes.io/projected/e88f079c-c694-463c-835c-46bd593a45c4-kube-api-access-ksmvr\") pod \"e88f079c-c694-463c-835c-46bd593a45c4\" (UID: \"e88f079c-c694-463c-835c-46bd593a45c4\") " Oct 2 19:19:55.133309 kubelet[2203]: I1002 19:19:55.133138 2203 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e88f079c-c694-463c-835c-46bd593a45c4-cilium-config-path\") pod \"e88f079c-c694-463c-835c-46bd593a45c4\" (UID: \"e88f079c-c694-463c-835c-46bd593a45c4\") " Oct 2 19:19:55.133803 kubelet[2203]: W1002 19:19:55.133430 2203 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/e88f079c-c694-463c-835c-46bd593a45c4/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:19:55.138599 kubelet[2203]: I1002 19:19:55.138516 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e88f079c-c694-463c-835c-46bd593a45c4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e88f079c-c694-463c-835c-46bd593a45c4" (UID: "e88f079c-c694-463c-835c-46bd593a45c4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:19:55.143502 kubelet[2203]: I1002 19:19:55.143452 2203 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e88f079c-c694-463c-835c-46bd593a45c4-kube-api-access-ksmvr" (OuterVolumeSpecName: "kube-api-access-ksmvr") pod "e88f079c-c694-463c-835c-46bd593a45c4" (UID: "e88f079c-c694-463c-835c-46bd593a45c4"). InnerVolumeSpecName "kube-api-access-ksmvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:19:55.188900 kubelet[2203]: I1002 19:19:55.188846 2203 scope.go:115] "RemoveContainer" containerID="1044acd15c9fb42dc9e5a3a11c949c3949c9e72f947afdf2cc74bbbbf51d1dbd" Oct 2 19:19:55.190745 systemd[1]: Removed slice kubepods-burstable-pode8056063_246d_470a_9e16_717ad060bbe8.slice. 
Oct 2 19:19:55.197575 env[1743]: time="2023-10-02T19:19:55.197153342Z" level=info msg="RemoveContainer for \"1044acd15c9fb42dc9e5a3a11c949c3949c9e72f947afdf2cc74bbbbf51d1dbd\""
Oct 2 19:19:55.201835 env[1743]: time="2023-10-02T19:19:55.201777015Z" level=info msg="RemoveContainer for \"1044acd15c9fb42dc9e5a3a11c949c3949c9e72f947afdf2cc74bbbbf51d1dbd\" returns successfully"
Oct 2 19:19:55.202933 kubelet[2203]: I1002 19:19:55.202859 2203 scope.go:115] "RemoveContainer" containerID="8640ccbb0a693b73e733396f806c1b44aa832d3d449b9a572382af626df7ff5f"
Oct 2 19:19:55.205476 env[1743]: time="2023-10-02T19:19:55.205411518Z" level=info msg="RemoveContainer for \"8640ccbb0a693b73e733396f806c1b44aa832d3d449b9a572382af626df7ff5f\""
Oct 2 19:19:55.209358 env[1743]: time="2023-10-02T19:19:55.209297528Z" level=info msg="RemoveContainer for \"8640ccbb0a693b73e733396f806c1b44aa832d3d449b9a572382af626df7ff5f\" returns successfully"
Oct 2 19:19:55.212222 kubelet[2203]: I1002 19:19:55.210999 2203 scope.go:115] "RemoveContainer" containerID="8640ccbb0a693b73e733396f806c1b44aa832d3d449b9a572382af626df7ff5f"
Oct 2 19:19:55.212402 env[1743]: time="2023-10-02T19:19:55.211712067Z" level=error msg="ContainerStatus for \"8640ccbb0a693b73e733396f806c1b44aa832d3d449b9a572382af626df7ff5f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8640ccbb0a693b73e733396f806c1b44aa832d3d449b9a572382af626df7ff5f\": not found"
Oct 2 19:19:55.211539 systemd[1]: Removed slice kubepods-besteffort-pode88f079c_c694_463c_835c_46bd593a45c4.slice.
Oct 2 19:19:55.214161 kubelet[2203]: E1002 19:19:55.214127 2203 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8640ccbb0a693b73e733396f806c1b44aa832d3d449b9a572382af626df7ff5f\": not found" containerID="8640ccbb0a693b73e733396f806c1b44aa832d3d449b9a572382af626df7ff5f"
Oct 2 19:19:55.214491 kubelet[2203]: I1002 19:19:55.214469 2203 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8640ccbb0a693b73e733396f806c1b44aa832d3d449b9a572382af626df7ff5f} err="failed to get container status \"8640ccbb0a693b73e733396f806c1b44aa832d3d449b9a572382af626df7ff5f\": rpc error: code = NotFound desc = an error occurred when try to find container \"8640ccbb0a693b73e733396f806c1b44aa832d3d449b9a572382af626df7ff5f\": not found"
Oct 2 19:19:55.233511 kubelet[2203]: I1002 19:19:55.233458 2203 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-ksmvr\" (UniqueName: \"kubernetes.io/projected/e88f079c-c694-463c-835c-46bd593a45c4-kube-api-access-ksmvr\") on node \"172.31.22.12\" DevicePath \"\""
Oct 2 19:19:55.233511 kubelet[2203]: I1002 19:19:55.233506 2203 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e88f079c-c694-463c-835c-46bd593a45c4-cilium-config-path\") on node \"172.31.22.12\" DevicePath \"\""
Oct 2 19:19:55.335938 kubelet[2203]: E1002 19:19:55.335870 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:19:55.345808 kubelet[2203]: E1002 19:19:55.345779 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:19:55.691367 systemd[1]: var-lib-kubelet-pods-e8056063\x2d246d\x2d470a\x2d9e16\x2d717ad060bbe8-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Oct 2 19:19:55.691537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e54f06d4dc8a3cd5ae9020c5cc4f74f70483fbbd7bb4dcc905cf732b95ec1375-rootfs.mount: Deactivated successfully.
Oct 2 19:19:55.691690 systemd[1]: var-lib-kubelet-pods-e8056063\x2d246d\x2d470a\x2d9e16\x2d717ad060bbe8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Oct 2 19:19:55.691830 systemd[1]: var-lib-kubelet-pods-e88f079c\x2dc694\x2d463c\x2d835c\x2d46bd593a45c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dksmvr.mount: Deactivated successfully.
Oct 2 19:19:55.691991 systemd[1]: var-lib-kubelet-pods-e8056063\x2d246d\x2d470a\x2d9e16\x2d717ad060bbe8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8t644.mount: Deactivated successfully.
Oct 2 19:19:56.336334 kubelet[2203]: E1002 19:19:56.336271 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:19:56.547620 kubelet[2203]: I1002 19:19:56.547562 2203 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=e8056063-246d-470a-9e16-717ad060bbe8 path="/var/lib/kubelet/pods/e8056063-246d-470a-9e16-717ad060bbe8/volumes"
Oct 2 19:19:56.548741 kubelet[2203]: I1002 19:19:56.548694 2203 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=e88f079c-c694-463c-835c-46bd593a45c4 path="/var/lib/kubelet/pods/e88f079c-c694-463c-835c-46bd593a45c4/volumes"
Oct 2 19:19:57.337402 kubelet[2203]: E1002 19:19:57.337342 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:19:58.338002 kubelet[2203]: E1002 19:19:58.337935 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:19:59.338489 kubelet[2203]: E1002 19:19:59.338394 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:00.081669 kubelet[2203]: E1002 19:20:00.081609 2203 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:00.339164 kubelet[2203]: E1002 19:20:00.338929 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:00.353246 kubelet[2203]: E1002 19:20:00.348140 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:20:01.339301 kubelet[2203]: E1002 19:20:01.339237 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:02.340099 kubelet[2203]: E1002 19:20:02.340038 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:03.340718 kubelet[2203]: E1002 19:20:03.340679 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:04.342203 kubelet[2203]: E1002 19:20:04.342166 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:05.343457 kubelet[2203]: E1002 19:20:05.343396 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:05.349322 kubelet[2203]: E1002 19:20:05.349295 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:20:05.425220 amazon-ssm-agent[1716]: 2023-10-02 19:20:05 INFO Backing off health check to every 600 seconds for 1800 seconds.
Oct 2 19:20:05.526617 amazon-ssm-agent[1716]: 2023-10-02 19:20:05 ERROR Health ping failed with error - AccessDeniedException: User: arn:aws:sts::075585003325:assumed-role/jenkins-test/i-0a883fec05341492a is not authorized to perform: ssm:UpdateInstanceInformation on resource: arn:aws:ec2:us-west-2:075585003325:instance/i-0a883fec05341492a because no identity-based policy allows the ssm:UpdateInstanceInformation action
Oct 2 19:20:05.526617 amazon-ssm-agent[1716]: status code: 400, request id: 67c0194a-2b95-45a1-8a55-62d3fddf4156
Oct 2 19:20:06.344359 kubelet[2203]: E1002 19:20:06.344299 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:07.344837 kubelet[2203]: E1002 19:20:07.344780 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:08.345935 kubelet[2203]: E1002 19:20:08.345862 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:09.346726 kubelet[2203]: E1002 19:20:09.346662 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:10.347385 kubelet[2203]: E1002 19:20:10.347328 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:10.351156 kubelet[2203]: E1002 19:20:10.351109 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:20:11.347853 kubelet[2203]: E1002 19:20:11.347788 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:12.349000 kubelet[2203]: E1002 19:20:12.348938 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:13.349286 kubelet[2203]: E1002 19:20:13.349249 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:14.350564 kubelet[2203]: E1002 19:20:14.350528 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:15.351309 kubelet[2203]: E1002 19:20:15.351256 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:15.351954 kubelet[2203]: E1002 19:20:15.351811 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:20:16.351964 kubelet[2203]: E1002 19:20:16.351901 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:17.352697 kubelet[2203]: E1002 19:20:17.352641 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:18.353457 kubelet[2203]: E1002 19:20:18.353398 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:18.463848 kubelet[2203]: E1002 19:20:18.463451 2203 controller.go:189] failed to update lease, error: Put "https://172.31.22.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.12?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Oct 2 19:20:18.518545 kubelet[2203]: E1002 19:20:18.518285 2203 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.22.12\": Get \"https://172.31.22.247:6443/api/v1/nodes/172.31.22.12?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct 2 19:20:19.354086 kubelet[2203]: E1002 19:20:19.354051 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:20.081760 kubelet[2203]: E1002 19:20:20.081696 2203 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:20.125637 env[1743]: time="2023-10-02T19:20:20.125590041Z" level=info msg="StopPodSandbox for \"d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a\""
Oct 2 19:20:20.126409 env[1743]: time="2023-10-02T19:20:20.126337460Z" level=info msg="TearDown network for sandbox \"d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a\" successfully"
Oct 2 19:20:20.126536 env[1743]: time="2023-10-02T19:20:20.126503396Z" level=info msg="StopPodSandbox for \"d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a\" returns successfully"
Oct 2 19:20:20.127120 env[1743]: time="2023-10-02T19:20:20.127080451Z" level=info msg="RemovePodSandbox for \"d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a\""
Oct 2 19:20:20.127511 env[1743]: time="2023-10-02T19:20:20.127431270Z" level=info msg="Forcibly stopping sandbox \"d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a\""
Oct 2 19:20:20.127730 env[1743]: time="2023-10-02T19:20:20.127695558Z" level=info msg="TearDown network for sandbox \"d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a\" successfully"
Oct 2 19:20:20.132207 env[1743]: time="2023-10-02T19:20:20.132158472Z" level=info msg="RemovePodSandbox \"d8a6f7ecb4f56c3492189bdd5e8d25e71a3fe0d6e99c9bdd050703862ab22c7a\" returns successfully"
Oct 2 19:20:20.133005 env[1743]: time="2023-10-02T19:20:20.132961295Z" level=info msg="StopPodSandbox for \"e54f06d4dc8a3cd5ae9020c5cc4f74f70483fbbd7bb4dcc905cf732b95ec1375\""
Oct 2 19:20:20.133296 env[1743]: time="2023-10-02T19:20:20.133233634Z" level=info msg="TearDown network for sandbox \"e54f06d4dc8a3cd5ae9020c5cc4f74f70483fbbd7bb4dcc905cf732b95ec1375\" successfully"
Oct 2 19:20:20.133441 env[1743]: time="2023-10-02T19:20:20.133404586Z" level=info msg="StopPodSandbox for \"e54f06d4dc8a3cd5ae9020c5cc4f74f70483fbbd7bb4dcc905cf732b95ec1375\" returns successfully"
Oct 2 19:20:20.134057 env[1743]: time="2023-10-02T19:20:20.134017473Z" level=info msg="RemovePodSandbox for \"e54f06d4dc8a3cd5ae9020c5cc4f74f70483fbbd7bb4dcc905cf732b95ec1375\""
Oct 2 19:20:20.134267 env[1743]: time="2023-10-02T19:20:20.134211129Z" level=info msg="Forcibly stopping sandbox \"e54f06d4dc8a3cd5ae9020c5cc4f74f70483fbbd7bb4dcc905cf732b95ec1375\""
Oct 2 19:20:20.134458 env[1743]: time="2023-10-02T19:20:20.134424717Z" level=info msg="TearDown network for sandbox \"e54f06d4dc8a3cd5ae9020c5cc4f74f70483fbbd7bb4dcc905cf732b95ec1375\" successfully"
Oct 2 19:20:20.140019 env[1743]: time="2023-10-02T19:20:20.139969057Z" level=info msg="RemovePodSandbox \"e54f06d4dc8a3cd5ae9020c5cc4f74f70483fbbd7bb4dcc905cf732b95ec1375\" returns successfully"
Oct 2 19:20:20.176740 kubelet[2203]: W1002 19:20:20.176708 2203 machine.go:65] Cannot read vendor id correctly, set empty.
Oct 2 19:20:20.353082 kubelet[2203]: E1002 19:20:20.352965 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:20:20.355483 kubelet[2203]: E1002 19:20:20.355419 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:21.355914 kubelet[2203]: E1002 19:20:21.355864 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:22.356675 kubelet[2203]: E1002 19:20:22.356611 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:23.357263 kubelet[2203]: E1002 19:20:23.357229 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:24.358721 kubelet[2203]: E1002 19:20:24.358687 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:25.354269 kubelet[2203]: E1002 19:20:25.354224 2203 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:20:25.360442 kubelet[2203]: E1002 19:20:25.360399 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:26.360914 kubelet[2203]: E1002 19:20:26.360833 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:27.362332 kubelet[2203]: E1002 19:20:27.362270 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:28.363472 kubelet[2203]: E1002 19:20:28.363414 2203 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:28.464535 kubelet[2203]: E1002 19:20:28.464181 2203 controller.go:189] failed to update lease, error: Put "https://172.31.22.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.12?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Oct 2 19:20:28.519165 kubelet[2203]: E1002 19:20:28.518999 2203 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.22.12\": Get \"https://172.31.22.247:6443/api/v1/nodes/172.31.22.12?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"