Oct 2 19:02:28.134179 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Oct 2 19:02:28.134524 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Oct 2 17:55:37 -00 2023 Oct 2 19:02:28.134863 kernel: efi: EFI v2.70 by EDK II Oct 2 19:02:28.135081 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71accf98 Oct 2 19:02:28.135100 kernel: ACPI: Early table checksum verification disabled Oct 2 19:02:28.135114 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Oct 2 19:02:28.135130 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Oct 2 19:02:28.135145 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Oct 2 19:02:28.135158 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Oct 2 19:02:28.135172 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Oct 2 19:02:28.135191 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Oct 2 19:02:28.135205 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Oct 2 19:02:28.135219 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Oct 2 19:02:28.135233 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Oct 2 19:02:28.135249 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Oct 2 19:02:28.135268 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Oct 2 19:02:28.135282 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Oct 2 19:02:28.135297 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Oct 2 19:02:28.135311 kernel: printk: bootconsole [uart0] enabled Oct 2 19:02:28.135325 kernel: NUMA: Failed to initialise from firmware Oct 2 19:02:28.135340 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Oct 2 19:02:28.135354 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff] Oct 2 19:02:28.135369 kernel: Zone ranges: Oct 2 19:02:28.135383 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Oct 2 19:02:28.135398 kernel: DMA32 empty Oct 2 19:02:28.135412 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Oct 2 19:02:28.135429 kernel: Movable zone start for each node Oct 2 19:02:28.135444 kernel: Early memory node ranges Oct 2 19:02:28.135458 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff] Oct 2 19:02:28.135473 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Oct 2 19:02:28.135487 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Oct 2 19:02:28.135501 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Oct 2 19:02:28.135515 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Oct 2 19:02:28.135529 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Oct 2 19:02:28.135543 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Oct 2 19:02:28.135557 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Oct 2 19:02:28.135572 kernel: psci: probing for conduit method from ACPI. Oct 2 19:02:28.135586 kernel: psci: PSCIv1.0 detected in firmware. 
Oct 2 19:02:28.135604 kernel: psci: Using standard PSCI v0.2 function IDs Oct 2 19:02:28.135618 kernel: psci: Trusted OS migration not required Oct 2 19:02:28.135639 kernel: psci: SMC Calling Convention v1.1 Oct 2 19:02:28.135654 kernel: ACPI: SRAT not present Oct 2 19:02:28.135670 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Oct 2 19:02:28.135689 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Oct 2 19:02:28.135704 kernel: pcpu-alloc: [0] 0 [0] 1 Oct 2 19:02:28.135719 kernel: Detected PIPT I-cache on CPU0 Oct 2 19:02:28.135734 kernel: CPU features: detected: GIC system register CPU interface Oct 2 19:02:28.135749 kernel: CPU features: detected: Spectre-v2 Oct 2 19:02:28.135764 kernel: CPU features: detected: Spectre-v3a Oct 2 19:02:28.135779 kernel: CPU features: detected: Spectre-BHB Oct 2 19:02:28.135794 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 2 19:02:28.135809 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 2 19:02:28.135824 kernel: CPU features: detected: ARM erratum 1742098 Oct 2 19:02:28.135839 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Oct 2 19:02:28.135858 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Oct 2 19:02:28.135873 kernel: Policy zone: Normal Oct 2 19:02:28.135891 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 19:02:28.135954 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:02:28.135974 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 19:02:28.135989 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:02:28.136005 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:02:28.136020 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Oct 2 19:02:28.136036 kernel: Memory: 3826444K/4030464K available (9792K kernel code, 2092K rwdata, 7548K rodata, 34560K init, 779K bss, 204020K reserved, 0K cma-reserved) Oct 2 19:02:28.136052 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 2 19:02:28.136072 kernel: trace event string verifier disabled Oct 2 19:02:28.136087 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 2 19:02:28.136103 kernel: rcu: RCU event tracing is enabled. Oct 2 19:02:28.136118 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 2 19:02:28.146039 kernel: Trampoline variant of Tasks RCU enabled. Oct 2 19:02:28.146075 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:02:28.146091 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
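The kernel command line logged above is a flat string of bare flags and key=value tokens (BOOT_IMAGE=..., mount.usr=..., verity.usrhash=..., flatcar.first_boot=detected, ...). A minimal sketch of how such a line can be split for inspection is below; this is an illustration only, not the parser the kernel or Flatcar actually uses, and it ignores quoting rules beyond simple double quotes.

```python
# Sketch: split a kernel command line (as logged above) into bare flags
# and key=value parameters. Illustration only, not the kernel's parser.
import shlex

def parse_cmdline(line: str) -> tuple[set[str], dict[str, str]]:
    flags, params = set(), {}
    for token in shlex.split(line):
        if "=" in token:
            key, _, value = token.partition("=")
            params[key] = value
        else:
            flags.add(token)
    return flags, params

if __name__ == "__main__":
    with open("/proc/cmdline") as f:
        flags, params = parse_cmdline(f.read().strip())
    print("first boot:", params.get("flatcar.first_boot"))
    print("usr verity hash:", params.get("verity.usrhash"))
    print("bare flags:", sorted(flags))
```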
Oct 2 19:02:28.146106 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 2 19:02:28.146122 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 2 19:02:28.146137 kernel: GICv3: 96 SPIs implemented Oct 2 19:02:28.146152 kernel: GICv3: 0 Extended SPIs implemented Oct 2 19:02:28.146167 kernel: GICv3: Distributor has no Range Selector support Oct 2 19:02:28.146191 kernel: Root IRQ handler: gic_handle_irq Oct 2 19:02:28.146206 kernel: GICv3: 16 PPIs implemented Oct 2 19:02:28.146222 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Oct 2 19:02:28.146237 kernel: ACPI: SRAT not present Oct 2 19:02:28.146252 kernel: ITS [mem 0x10080000-0x1009ffff] Oct 2 19:02:28.146267 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1) Oct 2 19:02:28.146283 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1) Oct 2 19:02:28.146298 kernel: GICv3: using LPI property table @0x00000004000c0000 Oct 2 19:02:28.146313 kernel: ITS: Using hypervisor restricted LPI range [128] Oct 2 19:02:28.146328 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000 Oct 2 19:02:28.146343 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Oct 2 19:02:28.146362 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Oct 2 19:02:28.146378 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Oct 2 19:02:28.146393 kernel: Console: colour dummy device 80x25 Oct 2 19:02:28.146409 kernel: printk: console [tty1] enabled Oct 2 19:02:28.146424 kernel: ACPI: Core revision 20210730 Oct 2 19:02:28.146440 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Oct 2 19:02:28.146456 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:02:28.146471 kernel: LSM: Security Framework initializing Oct 2 19:02:28.146487 kernel: SELinux: Initializing. Oct 2 19:02:28.146502 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:02:28.146522 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:02:28.146537 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:02:28.146553 kernel: Platform MSI: ITS@0x10080000 domain created Oct 2 19:02:28.146568 kernel: PCI/MSI: ITS@0x10080000 domain created Oct 2 19:02:28.146583 kernel: Remapping and enabling EFI services. Oct 2 19:02:28.146599 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:02:28.146614 kernel: Detected PIPT I-cache on CPU1 Oct 2 19:02:28.146630 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Oct 2 19:02:28.146645 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000 Oct 2 19:02:28.146665 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Oct 2 19:02:28.146680 kernel: smp: Brought up 1 node, 2 CPUs Oct 2 19:02:28.146695 kernel: SMP: Total of 2 processors activated. 
Oct 2 19:02:28.146711 kernel: CPU features: detected: 32-bit EL0 Support Oct 2 19:02:28.146726 kernel: CPU features: detected: 32-bit EL1 Support Oct 2 19:02:28.146741 kernel: CPU features: detected: CRC32 instructions Oct 2 19:02:28.146756 kernel: CPU: All CPU(s) started at EL1 Oct 2 19:02:28.146772 kernel: alternatives: patching kernel code Oct 2 19:02:28.146787 kernel: devtmpfs: initialized Oct 2 19:02:28.146805 kernel: KASLR disabled due to lack of seed Oct 2 19:02:28.146822 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:02:28.146838 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 2 19:02:28.146864 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:02:28.146884 kernel: SMBIOS 3.0.0 present. Oct 2 19:02:28.146929 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Oct 2 19:02:28.146950 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:02:28.146966 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 2 19:02:28.146983 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 2 19:02:28.146999 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 2 19:02:28.147016 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:02:28.147032 kernel: audit: type=2000 audit(0.248:1): state=initialized audit_enabled=0 res=1 Oct 2 19:02:28.147054 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:02:28.147070 kernel: cpuidle: using governor menu Oct 2 19:02:28.147086 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 2 19:02:28.147103 kernel: ASID allocator initialised with 32768 entries Oct 2 19:02:28.147119 kernel: ACPI: bus type PCI registered Oct 2 19:02:28.147139 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:02:28.147155 kernel: Serial: AMBA PL011 UART driver Oct 2 19:02:28.147172 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:02:28.147188 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Oct 2 19:02:28.147204 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:02:28.147220 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Oct 2 19:02:28.147236 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:02:28.147252 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 2 19:02:28.147268 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:02:28.147288 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:02:28.147304 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:02:28.147320 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:02:28.147336 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:02:28.147352 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:02:28.147368 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:02:28.147384 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 19:02:28.147413 kernel: ACPI: Interpreter enabled Oct 2 19:02:28.147432 kernel: ACPI: Using GIC for interrupt routing Oct 2 19:02:28.147452 kernel: ACPI: MCFG table detected, 1 entries Oct 2 19:02:28.147468 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Oct 2 19:02:28.147865 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:02:28.152192 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 2 19:02:28.152404 kernel: 
acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 2 19:02:28.152595 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Oct 2 19:02:28.152786 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Oct 2 19:02:28.152817 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Oct 2 19:02:28.152835 kernel: acpiphp: Slot [1] registered Oct 2 19:02:28.152852 kernel: acpiphp: Slot [2] registered Oct 2 19:02:28.152869 kernel: acpiphp: Slot [3] registered Oct 2 19:02:28.152885 kernel: acpiphp: Slot [4] registered Oct 2 19:02:28.152921 kernel: acpiphp: Slot [5] registered Oct 2 19:02:28.152941 kernel: acpiphp: Slot [6] registered Oct 2 19:02:28.152958 kernel: acpiphp: Slot [7] registered Oct 2 19:02:28.152974 kernel: acpiphp: Slot [8] registered Oct 2 19:02:28.152996 kernel: acpiphp: Slot [9] registered Oct 2 19:02:28.153012 kernel: acpiphp: Slot [10] registered Oct 2 19:02:28.153029 kernel: acpiphp: Slot [11] registered Oct 2 19:02:28.153045 kernel: acpiphp: Slot [12] registered Oct 2 19:02:28.153061 kernel: acpiphp: Slot [13] registered Oct 2 19:02:28.153077 kernel: acpiphp: Slot [14] registered Oct 2 19:02:28.153093 kernel: acpiphp: Slot [15] registered Oct 2 19:02:28.153109 kernel: acpiphp: Slot [16] registered Oct 2 19:02:28.153125 kernel: acpiphp: Slot [17] registered Oct 2 19:02:28.153141 kernel: acpiphp: Slot [18] registered Oct 2 19:02:28.153162 kernel: acpiphp: Slot [19] registered Oct 2 19:02:28.153178 kernel: acpiphp: Slot [20] registered Oct 2 19:02:28.153194 kernel: acpiphp: Slot [21] registered Oct 2 19:02:28.153211 kernel: acpiphp: Slot [22] registered Oct 2 19:02:28.153227 kernel: acpiphp: Slot [23] registered Oct 2 19:02:28.153242 kernel: acpiphp: Slot [24] registered Oct 2 19:02:28.153259 kernel: acpiphp: Slot [25] registered Oct 2 19:02:28.153275 kernel: acpiphp: Slot [26] registered Oct 2 19:02:28.153292 kernel: acpiphp: Slot [27] registered Oct 2 19:02:28.153311 kernel: acpiphp: Slot [28] registered Oct 2 19:02:28.153328 kernel: acpiphp: Slot [29] registered Oct 2 19:02:28.153344 kernel: acpiphp: Slot [30] registered Oct 2 19:02:28.153361 kernel: acpiphp: Slot [31] registered Oct 2 19:02:28.153377 kernel: PCI host bridge to bus 0000:00 Oct 2 19:02:28.153590 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Oct 2 19:02:28.153792 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 2 19:02:28.154013 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Oct 2 19:02:28.154200 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Oct 2 19:02:28.154431 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Oct 2 19:02:28.154658 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Oct 2 19:02:28.154868 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Oct 2 19:02:28.155115 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Oct 2 19:02:28.155317 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Oct 2 19:02:28.155526 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 2 19:02:28.155744 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Oct 2 19:02:28.155980 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Oct 2 19:02:28.156187 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Oct 2 19:02:28.156384 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Oct 2 19:02:28.156582 kernel: 
pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 2 19:02:28.156786 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Oct 2 19:02:28.157030 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Oct 2 19:02:28.157236 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Oct 2 19:02:28.157433 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Oct 2 19:02:28.157651 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Oct 2 19:02:28.157840 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Oct 2 19:02:28.158078 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 2 19:02:28.158267 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Oct 2 19:02:28.158296 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 2 19:02:28.158314 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 2 19:02:28.158331 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 2 19:02:28.158348 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 2 19:02:28.158364 kernel: iommu: Default domain type: Translated Oct 2 19:02:28.158381 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 2 19:02:28.158397 kernel: vgaarb: loaded Oct 2 19:02:28.158414 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:02:28.158432 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:02:28.158453 kernel: PTP clock support registered Oct 2 19:02:28.158471 kernel: Registered efivars operations Oct 2 19:02:28.158488 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 2 19:02:28.158504 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:02:28.158520 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:02:28.158536 kernel: pnp: PnP ACPI init Oct 2 19:02:28.158753 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Oct 2 19:02:28.158778 kernel: pnp: PnP ACPI: found 1 devices Oct 2 19:02:28.158795 kernel: NET: Registered PF_INET protocol family Oct 2 19:02:28.158816 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 19:02:28.158833 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 2 19:02:28.158850 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:02:28.158867 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:02:28.158884 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 2 19:02:28.160965 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 2 19:02:28.161001 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:02:28.161019 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:02:28.161036 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:02:28.161060 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:02:28.161077 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Oct 2 19:02:28.161093 kernel: kvm [1]: HYP mode not available Oct 2 19:02:28.161110 kernel: Initialise system trusted keyrings Oct 2 19:02:28.161127 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 2 19:02:28.161143 kernel: Key type asymmetric registered Oct 2 19:02:28.161159 kernel: Asymmetric key parser 'x509' registered Oct 2 19:02:28.161176 kernel: Block 
layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:02:28.161192 kernel: io scheduler mq-deadline registered Oct 2 19:02:28.161212 kernel: io scheduler kyber registered Oct 2 19:02:28.161228 kernel: io scheduler bfq registered Oct 2 19:02:28.161475 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Oct 2 19:02:28.161503 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 2 19:02:28.161522 kernel: ACPI: button: Power Button [PWRB] Oct 2 19:02:28.161540 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:02:28.161558 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Oct 2 19:02:28.161780 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Oct 2 19:02:28.161810 kernel: printk: console [ttyS0] disabled Oct 2 19:02:28.161828 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Oct 2 19:02:28.161845 kernel: printk: console [ttyS0] enabled Oct 2 19:02:28.161862 kernel: printk: bootconsole [uart0] disabled Oct 2 19:02:28.161879 kernel: thunder_xcv, ver 1.0 Oct 2 19:02:28.161895 kernel: thunder_bgx, ver 1.0 Oct 2 19:02:28.168278 kernel: nicpf, ver 1.0 Oct 2 19:02:28.168297 kernel: nicvf, ver 1.0 Oct 2 19:02:28.168550 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 2 19:02:28.168747 kernel: rtc-efi rtc-efi.0: setting system clock to 2023-10-02T19:02:27 UTC (1696273347) Oct 2 19:02:28.168772 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 2 19:02:28.168790 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:02:28.168807 kernel: Segment Routing with IPv6 Oct 2 19:02:28.168825 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:02:28.168841 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:02:28.168858 kernel: Key type dns_resolver registered Oct 2 19:02:28.168875 kernel: registered taskstats version 1 Oct 2 19:02:28.168916 kernel: Loading compiled-in X.509 certificates Oct 2 19:02:28.168940 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 3a2a38edc68cb70dc60ec0223a6460557b3bb28d' Oct 2 19:02:28.168957 kernel: Key type .fscrypt registered Oct 2 19:02:28.168973 kernel: Key type fscrypt-provisioning registered Oct 2 19:02:28.168989 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 2 19:02:28.169005 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:02:28.169021 kernel: ima: No architecture policies found Oct 2 19:02:28.169037 kernel: Freeing unused kernel memory: 34560K Oct 2 19:02:28.169054 kernel: Run /init as init process Oct 2 19:02:28.169075 kernel: with arguments: Oct 2 19:02:28.169091 kernel: /init Oct 2 19:02:28.169106 kernel: with environment: Oct 2 19:02:28.169122 kernel: HOME=/ Oct 2 19:02:28.169138 kernel: TERM=linux Oct 2 19:02:28.169154 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:02:28.169175 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:02:28.169197 systemd[1]: Detected virtualization amazon. Oct 2 19:02:28.169219 systemd[1]: Detected architecture arm64. Oct 2 19:02:28.169236 systemd[1]: Running in initrd. Oct 2 19:02:28.169253 systemd[1]: No hostname configured, using default hostname. Oct 2 19:02:28.169270 systemd[1]: Hostname set to . 
Oct 2 19:02:28.169289 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:02:28.169306 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:02:28.169323 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:02:28.169341 systemd[1]: Reached target cryptsetup.target. Oct 2 19:02:28.169362 systemd[1]: Reached target paths.target. Oct 2 19:02:28.169380 systemd[1]: Reached target slices.target. Oct 2 19:02:28.169397 systemd[1]: Reached target swap.target. Oct 2 19:02:28.169415 systemd[1]: Reached target timers.target. Oct 2 19:02:28.169433 systemd[1]: Listening on iscsid.socket. Oct 2 19:02:28.169451 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:02:28.169470 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:02:28.169487 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:02:28.169510 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:02:28.169527 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:02:28.169545 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:02:28.169563 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:02:28.169581 systemd[1]: Reached target sockets.target. Oct 2 19:02:28.169598 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:02:28.169636 systemd[1]: Finished network-cleanup.service. Oct 2 19:02:28.169655 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:02:28.169673 systemd[1]: Starting systemd-journald.service... Oct 2 19:02:28.169695 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:02:28.169713 systemd[1]: Starting systemd-resolved.service... Oct 2 19:02:28.169731 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:02:28.169749 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:02:28.169768 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:02:28.169788 kernel: audit: type=1130 audit(1696273348.134:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:28.169808 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:02:28.169830 systemd-journald[309]: Journal started Oct 2 19:02:28.178031 systemd-journald[309]: Runtime Journal (/run/log/journal/ec28059fc3721e2a31303b75302e5902) is 8.0M, max 75.4M, 67.4M free. Oct 2 19:02:28.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:28.160089 systemd-modules-load[310]: Inserted module 'overlay' Oct 2 19:02:28.162326 systemd-resolved[311]: Positive Trust Anchors: Oct 2 19:02:28.162340 systemd-resolved[311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:02:28.162394 systemd-resolved[311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:02:28.217672 systemd[1]: Started systemd-journald.service. 
Oct 2 19:02:28.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:28.222963 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:02:28.227122 kernel: audit: type=1130 audit(1696273348.216:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:28.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:28.246733 kernel: audit: type=1130 audit(1696273348.227:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:28.246802 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:02:28.257928 kernel: Bridge firewalling registered Oct 2 19:02:28.258260 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:02:28.274275 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:02:28.276088 systemd-modules-load[310]: Inserted module 'br_netfilter' Oct 2 19:02:28.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:28.289941 kernel: audit: type=1130 audit(1696273348.278:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:28.313939 kernel: SCSI subsystem initialized Oct 2 19:02:28.325995 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:02:28.356085 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:02:28.356124 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:02:28.356147 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:02:28.356170 kernel: audit: type=1130 audit(1696273348.337:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:28.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:28.340503 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:02:28.358805 systemd-modules-load[310]: Inserted module 'dm_multipath' Oct 2 19:02:28.362543 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:02:28.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:28.380786 systemd[1]: Starting systemd-sysctl.service... 
Oct 2 19:02:28.398938 kernel: audit: type=1130 audit(1696273348.377:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:28.421743 dracut-cmdline[328]: dracut-dracut-053 Oct 2 19:02:28.434384 dracut-cmdline[328]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 19:02:28.450351 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:02:28.461234 kernel: audit: type=1130 audit(1696273348.450:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:28.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:28.655941 kernel: Loading iSCSI transport class v2.0-870. Oct 2 19:02:28.669957 kernel: iscsi: registered transport (tcp) Oct 2 19:02:28.696286 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:02:28.696356 kernel: QLogic iSCSI HBA Driver Oct 2 19:02:28.789971 kernel: random: crng init done Oct 2 19:02:28.790021 systemd-resolved[311]: Defaulting to hostname 'linux'. Oct 2 19:02:28.793802 systemd[1]: Started systemd-resolved.service. Oct 2 19:02:28.797048 systemd[1]: Reached target nss-lookup.target. Oct 2 19:02:28.808723 kernel: audit: type=1130 audit(1696273348.795:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:28.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:28.885386 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:02:28.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:28.890071 systemd[1]: Starting dracut-pre-udev.service... Oct 2 19:02:28.899637 kernel: audit: type=1130 audit(1696273348.886:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:02:28.983937 kernel: raid6: neonx8 gen() 6408 MB/s Oct 2 19:02:29.001932 kernel: raid6: neonx8 xor() 4701 MB/s Oct 2 19:02:29.019932 kernel: raid6: neonx4 gen() 6613 MB/s Oct 2 19:02:29.037930 kernel: raid6: neonx4 xor() 4887 MB/s Oct 2 19:02:29.055931 kernel: raid6: neonx2 gen() 5840 MB/s Oct 2 19:02:29.073930 kernel: raid6: neonx2 xor() 4480 MB/s Oct 2 19:02:29.091930 kernel: raid6: neonx1 gen() 4519 MB/s Oct 2 19:02:29.109930 kernel: raid6: neonx1 xor() 3674 MB/s Oct 2 19:02:29.127932 kernel: raid6: int64x8 gen() 3437 MB/s Oct 2 19:02:29.145930 kernel: raid6: int64x8 xor() 2079 MB/s Oct 2 19:02:29.163931 kernel: raid6: int64x4 gen() 3854 MB/s Oct 2 19:02:29.181929 kernel: raid6: int64x4 xor() 2192 MB/s Oct 2 19:02:29.199931 kernel: raid6: int64x2 gen() 3625 MB/s Oct 2 19:02:29.217929 kernel: raid6: int64x2 xor() 1948 MB/s Oct 2 19:02:29.235932 kernel: raid6: int64x1 gen() 2770 MB/s Oct 2 19:02:29.255648 kernel: raid6: int64x1 xor() 1451 MB/s Oct 2 19:02:29.255682 kernel: raid6: using algorithm neonx4 gen() 6613 MB/s Oct 2 19:02:29.255706 kernel: raid6: .... xor() 4887 MB/s, rmw enabled Oct 2 19:02:29.257515 kernel: raid6: using neon recovery algorithm Oct 2 19:02:29.275936 kernel: xor: measuring software checksum speed Oct 2 19:02:29.278928 kernel: 8regs : 9334 MB/sec Oct 2 19:02:29.281930 kernel: 32regs : 11107 MB/sec Oct 2 19:02:29.285502 kernel: arm64_neon : 9534 MB/sec Oct 2 19:02:29.285534 kernel: xor: using function: 32regs (11107 MB/sec) Oct 2 19:02:29.375950 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Oct 2 19:02:29.414096 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:02:29.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:29.416000 audit: BPF prog-id=7 op=LOAD Oct 2 19:02:29.416000 audit: BPF prog-id=8 op=LOAD Oct 2 19:02:29.418349 systemd[1]: Starting systemd-udevd.service... Oct 2 19:02:29.456809 systemd-udevd[509]: Using default interface naming scheme 'v252'. Oct 2 19:02:29.468155 systemd[1]: Started systemd-udevd.service. Oct 2 19:02:29.475620 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:02:29.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:29.537760 dracut-pre-trigger[511]: rd.md=0: removing MD RAID activation Oct 2 19:02:29.647512 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:02:29.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:29.651176 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:02:29.769153 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:02:29.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:02:29.920111 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 2 19:02:29.920183 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Oct 2 19:02:29.929692 kernel: ena 0000:00:05.0: ENA device version: 0.10 Oct 2 19:02:29.930077 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Oct 2 19:02:29.939153 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Oct 2 19:02:29.939215 kernel: nvme nvme0: pci function 0000:00:04.0 Oct 2 19:02:29.949557 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:d1:97:cd:0f:c7 Oct 2 19:02:29.949843 kernel: nvme nvme0: 2/0/0 default/read/poll queues Oct 2 19:02:29.956573 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 2 19:02:29.956621 kernel: GPT:9289727 != 16777215 Oct 2 19:02:29.958917 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 2 19:02:29.960264 kernel: GPT:9289727 != 16777215 Oct 2 19:02:29.962236 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 2 19:02:29.963822 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:02:29.967758 (udev-worker)[564]: Network interface NamePolicy= disabled on kernel command line. Oct 2 19:02:30.053933 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (567) Oct 2 19:02:30.120935 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:02:30.199183 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:02:30.232144 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:02:30.249734 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:02:30.307764 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:02:30.312724 systemd[1]: Starting disk-uuid.service... Oct 2 19:02:30.334306 disk-uuid[670]: Primary Header is updated. Oct 2 19:02:30.334306 disk-uuid[670]: Secondary Entries is updated. Oct 2 19:02:30.334306 disk-uuid[670]: Secondary Header is updated. Oct 2 19:02:30.343940 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:02:30.352939 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:02:30.360932 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:02:31.360960 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:02:31.361373 disk-uuid[671]: The operation has completed successfully. Oct 2 19:02:31.660858 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:02:31.674220 kernel: kauditd_printk_skb: 6 callbacks suppressed Oct 2 19:02:31.674259 kernel: audit: type=1130 audit(1696273351.659:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:31.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:31.661087 systemd[1]: Finished disk-uuid.service. Oct 2 19:02:31.683107 kernel: audit: type=1131 audit(1696273351.659:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:31.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:31.665283 systemd[1]: Starting verity-setup.service... 
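The GPT warnings above ("Primary header thinks Alt. header is not at the end of the disk", "GPT:9289727 != 16777215") mean the backup GPT header still sits where the original, smaller disk image ended instead of at the device's last sector, which is why disk-uuid.service then rewrites the secondary header and entries. A hedged sketch of that check is below: it reads the primary header at LBA 1 and compares its alternate-LBA field (offset 32 in the UEFI GPT header) to the last sector of the device. It assumes 512-byte logical sectors, and /dev/nvme0n1 is an example path, not taken from any tool's implementation.

```python
# Sketch (not the kernel's implementation): read the primary GPT header at
# LBA 1 and compare its "alternate LBA" field to the device's last sector,
# reproducing the "9289727 != 16777215" style check seen in the log.
# Assumes 512-byte logical sectors; /dev/nvme0n1 is an example path.
import struct

SECTOR = 512

def gpt_backup_mismatch(dev: str = "/dev/nvme0n1") -> tuple[int, int]:
    with open(dev, "rb") as f:
        f.seek(0, 2)                       # seek to end to get device size
        last_lba = f.tell() // SECTOR - 1
        f.seek(1 * SECTOR)                 # primary GPT header lives at LBA 1
        header = f.read(92)
    if header[:8] != b"EFI PART":
        raise ValueError("no GPT signature at LBA 1")
    # Offset 32 in the GPT header is the 64-bit alternate (backup) header LBA.
    (alternate_lba,) = struct.unpack_from("<Q", header, 32)
    return alternate_lba, last_lba

if __name__ == "__main__":
    alt, last = gpt_backup_mismatch()
    print(f"GPT: {alt} != {last}" if alt != last else f"backup header at end ({last})")
```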
Oct 2 19:02:31.720938 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 2 19:02:31.815522 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:02:31.820457 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:02:31.831256 systemd[1]: Finished verity-setup.service. Oct 2 19:02:31.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:31.841947 kernel: audit: type=1130 audit(1696273351.833:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:31.917947 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:02:31.919735 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:02:31.922689 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 19:02:31.926704 systemd[1]: Starting ignition-setup.service... Oct 2 19:02:31.942191 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 19:02:31.966505 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:02:31.966567 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:02:31.969089 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:02:31.992614 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:02:32.023037 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:02:32.069823 systemd[1]: Finished ignition-setup.service. Oct 2 19:02:32.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:32.073229 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:02:32.086956 kernel: audit: type=1130 audit(1696273352.070:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:32.302261 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:02:32.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:32.312000 audit: BPF prog-id=9 op=LOAD Oct 2 19:02:32.314842 systemd[1]: Starting systemd-networkd.service... Oct 2 19:02:32.318312 kernel: audit: type=1130 audit(1696273352.303:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:32.318353 kernel: audit: type=1334 audit(1696273352.312:22): prog-id=9 op=LOAD Oct 2 19:02:32.374428 systemd-networkd[1193]: lo: Link UP Oct 2 19:02:32.374451 systemd-networkd[1193]: lo: Gained carrier Oct 2 19:02:32.378422 systemd-networkd[1193]: Enumeration completed Oct 2 19:02:32.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:32.378963 systemd-networkd[1193]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 2 19:02:32.379099 systemd[1]: Started systemd-networkd.service. Oct 2 19:02:32.399742 kernel: audit: type=1130 audit(1696273352.379:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:32.381030 systemd[1]: Reached target network.target. Oct 2 19:02:32.384667 systemd[1]: Starting iscsiuio.service... Oct 2 19:02:32.399059 systemd-networkd[1193]: eth0: Link UP Oct 2 19:02:32.399068 systemd-networkd[1193]: eth0: Gained carrier Oct 2 19:02:32.414140 systemd-networkd[1193]: eth0: DHCPv4 address 172.31.18.218/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 2 19:02:32.416748 systemd[1]: Started iscsiuio.service. Oct 2 19:02:32.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:32.427299 systemd[1]: Starting iscsid.service... Oct 2 19:02:32.433407 kernel: audit: type=1130 audit(1696273352.420:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:32.442840 iscsid[1198]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:02:32.442840 iscsid[1198]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 2 19:02:32.442840 iscsid[1198]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:02:32.442840 iscsid[1198]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 19:02:32.475010 kernel: audit: type=1130 audit(1696273352.462:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:32.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:32.475121 iscsid[1198]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:02:32.475121 iscsid[1198]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:02:32.457884 systemd[1]: Started iscsid.service. Oct 2 19:02:32.482552 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:02:32.528525 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:02:32.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:32.531941 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:02:32.549439 kernel: audit: type=1130 audit(1696273352.530:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:32.541003 systemd[1]: Reached target remote-cryptsetup.target. 
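The iscsid warnings above report that /etc/iscsi/initiatorname.iscsi is missing and describe the expected contents (a single InitiatorName= line holding a standard IQN, iqn.yyyy-mm.<reversed domain>[:identifier]). The sketch below writes such a file; the "com.example" naming authority, the date, and the hostname-based identifier are placeholder values, and the file only matters if software iSCSI is actually in use.

```python
# Sketch: generate an initiatorname.iscsi file in the format iscsid asks for
# above. Naming authority, date, and identifier are placeholder values.
import socket
from pathlib import Path

def write_initiator_name(path: str = "./initiatorname.iscsi",
                         naming_authority: str = "com.example",
                         date: str = "2004-10") -> str:
    iqn = f"iqn.{date}.{naming_authority}:{socket.gethostname()}"
    Path(path).write_text(f"InitiatorName={iqn}\n")
    return iqn

if __name__ == "__main__":
    print("wrote", write_initiator_name())
```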
Oct 2 19:02:32.542864 systemd[1]: Reached target remote-fs.target. Oct 2 19:02:32.546048 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:02:32.583072 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:02:32.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:32.686108 ignition[1112]: Ignition 2.14.0 Oct 2 19:02:32.686134 ignition[1112]: Stage: fetch-offline Oct 2 19:02:32.686544 ignition[1112]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:02:32.686604 ignition[1112]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:02:32.709172 ignition[1112]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:02:32.710062 ignition[1112]: Ignition finished successfully Oct 2 19:02:32.715356 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:02:32.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:32.720320 systemd[1]: Starting ignition-fetch.service... Oct 2 19:02:32.750121 ignition[1217]: Ignition 2.14.0 Oct 2 19:02:32.750151 ignition[1217]: Stage: fetch Oct 2 19:02:32.750500 ignition[1217]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:02:32.750559 ignition[1217]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:02:32.765097 ignition[1217]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:02:32.768812 ignition[1217]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:02:32.773083 ignition[1217]: INFO : PUT result: OK Oct 2 19:02:32.775984 ignition[1217]: DEBUG : parsed url from cmdline: "" Oct 2 19:02:32.775984 ignition[1217]: INFO : no config URL provided Oct 2 19:02:32.775984 ignition[1217]: INFO : reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:02:32.782386 ignition[1217]: INFO : no config at "/usr/lib/ignition/user.ign" Oct 2 19:02:32.782386 ignition[1217]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:02:32.782386 ignition[1217]: INFO : PUT result: OK Oct 2 19:02:32.782386 ignition[1217]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Oct 2 19:02:32.782386 ignition[1217]: INFO : GET result: OK Oct 2 19:02:32.801744 ignition[1217]: DEBUG : parsing config with SHA512: 89bf1e0301491641711ff9ca3d34e8c376fea66b5dcd73009659d6c97d937eb3675ad22cfb245c5170dd232438a10420a0b07257bde3305a3bbde2b8b4e888a8 Oct 2 19:02:32.820934 unknown[1217]: fetched base config from "system" Oct 2 19:02:32.821208 unknown[1217]: fetched base config from "system" Oct 2 19:02:32.822329 ignition[1217]: fetch: fetch complete Oct 2 19:02:32.821223 unknown[1217]: fetched user config from "aws" Oct 2 19:02:32.822344 ignition[1217]: fetch: fetch passed Oct 2 19:02:32.822431 ignition[1217]: Ignition finished successfully Oct 2 19:02:32.833392 systemd[1]: Finished ignition-fetch.service. Oct 2 19:02:32.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:02:32.837919 systemd[1]: Starting ignition-kargs.service... Oct 2 19:02:32.872200 ignition[1223]: Ignition 2.14.0 Oct 2 19:02:32.872228 ignition[1223]: Stage: kargs Oct 2 19:02:32.872587 ignition[1223]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:02:32.872652 ignition[1223]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:02:32.889545 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:02:32.891885 ignition[1223]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:02:32.895200 ignition[1223]: INFO : PUT result: OK Oct 2 19:02:32.900171 ignition[1223]: kargs: kargs passed Oct 2 19:02:32.900443 ignition[1223]: Ignition finished successfully Oct 2 19:02:32.905462 systemd[1]: Finished ignition-kargs.service. Oct 2 19:02:32.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:32.910180 systemd[1]: Starting ignition-disks.service... Oct 2 19:02:32.938974 ignition[1229]: Ignition 2.14.0 Oct 2 19:02:32.939004 ignition[1229]: Stage: disks Oct 2 19:02:32.939370 ignition[1229]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:02:32.939427 ignition[1229]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:02:32.955971 ignition[1229]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:02:32.958937 ignition[1229]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:02:32.961551 ignition[1229]: INFO : PUT result: OK Oct 2 19:02:32.966428 ignition[1229]: disks: disks passed Oct 2 19:02:32.966531 ignition[1229]: Ignition finished successfully Oct 2 19:02:32.970950 systemd[1]: Finished ignition-disks.service. Oct 2 19:02:32.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:32.974216 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:02:32.977699 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:02:32.980808 systemd[1]: Reached target local-fs.target. Oct 2 19:02:32.982530 systemd[1]: Reached target sysinit.target. Oct 2 19:02:32.985499 systemd[1]: Reached target basic.target. Oct 2 19:02:32.991266 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:02:33.048454 systemd-fsck[1237]: ROOT: clean, 603/553520 files, 56011/553472 blocks Oct 2 19:02:33.058388 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:02:33.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:33.062222 systemd[1]: Mounting sysroot.mount... Oct 2 19:02:33.093941 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:02:33.095571 systemd[1]: Mounted sysroot.mount. Oct 2 19:02:33.097360 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:02:33.103637 systemd[1]: Mounting sysroot-usr.mount... 
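Each Ignition stage above (fetch, kargs, disks) first logs "PUT http://169.254.169.254/latest/api/token: attempt #1" and then performs authenticated GETs such as "GET http://169.254.169.254/2019-10-01/user-data". That is the IMDSv2 token exchange; a hedged standard-library sketch of the same request pattern is below. It illustrates what the log shows, it is not Ignition's own code.

```python
# Sketch of the IMDSv2 exchange visible in the Ignition log entries above:
# PUT to obtain a session token, then GET user-data with the token header.
# Illustration of the request pattern only, not Ignition's implementation.
import urllib.request

IMDS = "http://169.254.169.254"

def fetch_user_data(ttl_seconds: int = 120) -> bytes:
    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(token_req, timeout=5) as resp:
        token = resp.read().decode()

    data_req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(data_req, timeout=5) as resp:
        return resp.read()

if __name__ == "__main__":
    print(fetch_user_data()[:200])
```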
Oct 2 19:02:33.107522 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 19:02:33.110622 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:02:33.112564 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:02:33.128283 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:02:33.141505 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:02:33.146845 systemd[1]: Starting initrd-setup-root.service... Oct 2 19:02:33.172941 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1254) Oct 2 19:02:33.180187 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:02:33.180251 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:02:33.183503 initrd-setup-root[1259]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:02:33.186351 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:02:33.196714 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:02:33.201599 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:02:33.213395 initrd-setup-root[1285]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:02:33.231721 initrd-setup-root[1293]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:02:33.251190 initrd-setup-root[1301]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:02:33.468962 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:02:33.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:33.473616 systemd[1]: Starting ignition-mount.service... Oct 2 19:02:33.491207 systemd[1]: Starting sysroot-boot.service... Oct 2 19:02:33.507679 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Oct 2 19:02:33.510004 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Oct 2 19:02:33.549308 systemd[1]: Finished sysroot-boot.service. Oct 2 19:02:33.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:33.553663 ignition[1320]: INFO : Ignition 2.14.0 Oct 2 19:02:33.557350 ignition[1320]: INFO : Stage: mount Oct 2 19:02:33.557350 ignition[1320]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:02:33.557350 ignition[1320]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:02:33.574980 ignition[1320]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:02:33.574980 ignition[1320]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:02:33.593806 ignition[1320]: INFO : PUT result: OK Oct 2 19:02:33.598821 ignition[1320]: INFO : mount: mount passed Oct 2 19:02:33.600529 ignition[1320]: INFO : Ignition finished successfully Oct 2 19:02:33.603713 systemd[1]: Finished ignition-mount.service. Oct 2 19:02:33.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:02:33.608284 systemd[1]: Starting ignition-files.service... Oct 2 19:02:33.634242 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:02:33.659028 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1329) Oct 2 19:02:33.664927 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:02:33.664966 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:02:33.664990 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:02:33.673938 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:02:33.678759 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:02:33.714116 ignition[1348]: INFO : Ignition 2.14.0 Oct 2 19:02:33.716095 ignition[1348]: INFO : Stage: files Oct 2 19:02:33.718692 ignition[1348]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:02:33.721388 ignition[1348]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:02:33.740706 ignition[1348]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:02:33.743735 ignition[1348]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:02:33.746728 ignition[1348]: INFO : PUT result: OK Oct 2 19:02:33.751811 ignition[1348]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:02:33.756161 ignition[1348]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:02:33.756161 ignition[1348]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:02:33.794529 ignition[1348]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:02:33.797459 ignition[1348]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:02:33.801274 unknown[1348]: wrote ssh authorized keys file for user: core Oct 2 19:02:33.803579 ignition[1348]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:02:33.807199 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Oct 2 19:02:33.811188 ignition[1348]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Oct 2 19:02:34.062073 systemd-networkd[1193]: eth0: Gained IPv6LL Oct 2 19:02:41.492993 ignition[1348]: INFO : GET result: OK Oct 2 19:02:41.966774 ignition[1348]: DEBUG : file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Oct 2 19:02:41.971535 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Oct 2 19:02:41.971535 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.24.2-linux-arm64.tar.gz" Oct 2 19:02:41.979293 ignition[1348]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-arm64.tar.gz: attempt #1 Oct 2 19:02:42.081885 ignition[1348]: INFO : GET result: OK Oct 2 19:02:42.252880 ignition[1348]: DEBUG : file matches expected sum of: 
ebd055e9b2888624d006decd582db742131ed815d059d529ba21eaf864becca98a84b20a10eec91051b9d837c6855d28d5042bf5e9a454f4540aec6b82d37e96 Oct 2 19:02:42.257648 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.24.2-linux-arm64.tar.gz" Oct 2 19:02:42.257648 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Oct 2 19:02:42.264979 ignition[1348]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:02:42.277533 ignition[1348]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2364252992" Oct 2 19:02:42.280493 ignition[1348]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2364252992": device or resource busy Oct 2 19:02:42.283817 ignition[1348]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2364252992", trying btrfs: device or resource busy Oct 2 19:02:42.287495 ignition[1348]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2364252992" Oct 2 19:02:42.294412 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1350) Oct 2 19:02:42.295148 ignition[1348]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2364252992" Oct 2 19:02:42.338410 ignition[1348]: INFO : op(3): [started] unmounting "/mnt/oem2364252992" Oct 2 19:02:42.340772 ignition[1348]: INFO : op(3): [finished] unmounting "/mnt/oem2364252992" Oct 2 19:02:42.343705 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Oct 2 19:02:42.343705 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:02:42.343705 ignition[1348]: INFO : GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/arm64/kubeadm: attempt #1 Oct 2 19:02:42.345299 systemd[1]: mnt-oem2364252992.mount: Deactivated successfully. 
Oct 2 19:02:42.456442 ignition[1348]: INFO : GET result: OK Oct 2 19:02:43.366788 ignition[1348]: DEBUG : file matches expected sum of: daab8965a4f617d1570d04c031ab4d55fff6aa13a61f0e4045f2338947f9fb0ee3a80fdee57cfe86db885390595460342181e1ec52b89f127ef09c393ae3db7f Oct 2 19:02:43.372078 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:02:43.372078 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:02:43.372078 ignition[1348]: INFO : GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/arm64/kubelet: attempt #1 Oct 2 19:02:43.432765 ignition[1348]: INFO : GET result: OK Oct 2 19:02:45.541420 ignition[1348]: DEBUG : file matches expected sum of: 7b872a34d86e8aa75455a62a20f5cf16426de2ae54ffb8e0250fead920838df818201b8512c2f8bf4c939e5b21babab371f3a48803e2e861da9e6f8cdd022324 Oct 2 19:02:45.546624 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:02:45.546624 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:02:45.546624 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:02:45.546624 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:02:45.573653 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:02:45.573653 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Oct 2 19:02:45.573653 ignition[1348]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:02:45.573653 ignition[1348]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1169480313" Oct 2 19:02:45.573653 ignition[1348]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1169480313": device or resource busy Oct 2 19:02:45.573653 ignition[1348]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1169480313", trying btrfs: device or resource busy Oct 2 19:02:45.573653 ignition[1348]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1169480313" Oct 2 19:02:45.573653 ignition[1348]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1169480313" Oct 2 19:02:45.619414 ignition[1348]: INFO : op(6): [started] unmounting "/mnt/oem1169480313" Oct 2 19:02:45.619414 ignition[1348]: INFO : op(6): [finished] unmounting "/mnt/oem1169480313" Oct 2 19:02:45.619414 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Oct 2 19:02:45.619414 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Oct 2 19:02:45.619414 ignition[1348]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:02:45.588694 systemd[1]: mnt-oem1169480313.mount: Deactivated successfully. 
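[Editor's note] The "GET <url>: attempt #1" / "file matches expected sum of: <sha512>" pairs above (cni-plugins, crictl, kubeadm, kubelet) show the files stage downloading each artifact and checking it against the SHA-512 digest carried in the Ignition config before writing it under /sysroot. The sketch below shows that verify-then-write pattern under stated assumptions; it is an illustration, not Ignition's own code, and fetch_verified is a hypothetical helper.

# Sketch of the download-and-verify pattern reflected in the log:
# fetch the artifact, hash it, and keep it only if the SHA-512 digest
# matches the expected value.
import hashlib
import urllib.request

def fetch_verified(url: str, expected_sha512: str, dest: str) -> None:
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    digest = hashlib.sha512(data).hexdigest()
    if digest != expected_sha512:
        raise ValueError(f"checksum mismatch for {url}: got {digest}")
    with open(dest, "wb") as f:
        f.write(data)

# Usage: pass one of the URLs printed above together with the digest logged
# for the same operation, e.g. the cni-plugins tarball and the sum from op(3).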
Oct 2 19:02:45.636959 ignition[1348]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1253084333" Oct 2 19:02:45.636959 ignition[1348]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1253084333": device or resource busy Oct 2 19:02:45.636959 ignition[1348]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1253084333", trying btrfs: device or resource busy Oct 2 19:02:45.636959 ignition[1348]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1253084333" Oct 2 19:02:45.657018 ignition[1348]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1253084333" Oct 2 19:02:45.657018 ignition[1348]: INFO : op(9): [started] unmounting "/mnt/oem1253084333" Oct 2 19:02:45.667046 ignition[1348]: INFO : op(9): [finished] unmounting "/mnt/oem1253084333" Oct 2 19:02:45.667046 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Oct 2 19:02:45.667046 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 19:02:45.667046 ignition[1348]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:02:45.664828 systemd[1]: mnt-oem1253084333.mount: Deactivated successfully. Oct 2 19:02:45.697436 ignition[1348]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3612913678" Oct 2 19:02:45.700454 ignition[1348]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3612913678": device or resource busy Oct 2 19:02:45.700454 ignition[1348]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3612913678", trying btrfs: device or resource busy Oct 2 19:02:45.700454 ignition[1348]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3612913678" Oct 2 19:02:45.719232 ignition[1348]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3612913678" Oct 2 19:02:45.722193 ignition[1348]: INFO : op(c): [started] unmounting "/mnt/oem3612913678" Oct 2 19:02:45.724379 ignition[1348]: INFO : op(c): [finished] unmounting "/mnt/oem3612913678" Oct 2 19:02:45.724379 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 19:02:45.736826 ignition[1348]: INFO : files: op(d): [started] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:02:45.736826 ignition[1348]: INFO : files: op(d): [finished] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:02:45.736826 ignition[1348]: INFO : files: op(e): [started] processing unit "amazon-ssm-agent.service" Oct 2 19:02:45.736826 ignition[1348]: INFO : files: op(e): op(f): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Oct 2 19:02:45.736826 ignition[1348]: INFO : files: op(e): op(f): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Oct 2 19:02:45.736826 ignition[1348]: INFO : files: op(e): [finished] processing unit "amazon-ssm-agent.service" Oct 2 19:02:45.736826 ignition[1348]: INFO : files: op(10): [started] processing unit "nvidia.service" Oct 2 19:02:45.736826 ignition[1348]: INFO : files: op(10): [finished] processing unit "nvidia.service" Oct 2 19:02:45.736826 ignition[1348]: INFO : files: op(11): [started] processing unit "prepare-cni-plugins.service" Oct 2 
19:02:45.736826 ignition[1348]: INFO : files: op(11): op(12): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:02:45.736826 ignition[1348]: INFO : files: op(11): op(12): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:02:45.736826 ignition[1348]: INFO : files: op(11): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:02:45.736826 ignition[1348]: INFO : files: op(13): [started] processing unit "prepare-critools.service" Oct 2 19:02:45.736826 ignition[1348]: INFO : files: op(13): op(14): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:02:45.736826 ignition[1348]: INFO : files: op(13): op(14): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:02:45.736826 ignition[1348]: INFO : files: op(13): [finished] processing unit "prepare-critools.service" Oct 2 19:02:45.736826 ignition[1348]: INFO : files: op(15): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:02:45.736826 ignition[1348]: INFO : files: op(15): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:02:45.736826 ignition[1348]: INFO : files: op(16): [started] setting preset to enabled for "amazon-ssm-agent.service" Oct 2 19:02:45.797179 ignition[1348]: INFO : files: op(16): [finished] setting preset to enabled for "amazon-ssm-agent.service" Oct 2 19:02:45.797179 ignition[1348]: INFO : files: op(17): [started] setting preset to enabled for "nvidia.service" Oct 2 19:02:45.797179 ignition[1348]: INFO : files: op(17): [finished] setting preset to enabled for "nvidia.service" Oct 2 19:02:45.797179 ignition[1348]: INFO : files: op(18): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:02:45.797179 ignition[1348]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:02:45.797179 ignition[1348]: INFO : files: op(19): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:02:45.797179 ignition[1348]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:02:45.797179 ignition[1348]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:02:45.797179 ignition[1348]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:02:45.797179 ignition[1348]: INFO : files: files passed Oct 2 19:02:45.797179 ignition[1348]: INFO : Ignition finished successfully Oct 2 19:02:45.821166 systemd[1]: Finished ignition-files.service. Oct 2 19:02:45.840937 kernel: kauditd_printk_skb: 9 callbacks suppressed Oct 2 19:02:45.840998 kernel: audit: type=1130 audit(1696273365.829:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:45.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:45.841972 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Oct 2 19:02:45.852011 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:02:45.857319 systemd[1]: Starting ignition-quench.service... Oct 2 19:02:45.874264 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:02:45.876652 systemd[1]: Finished ignition-quench.service. Oct 2 19:02:45.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:45.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:45.894768 kernel: audit: type=1130 audit(1696273365.878:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:45.894808 kernel: audit: type=1131 audit(1696273365.878:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:45.909455 initrd-setup-root-after-ignition[1374]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:02:45.914954 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:02:45.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:45.919100 systemd[1]: Reached target ignition-complete.target. Oct 2 19:02:45.935833 kernel: audit: type=1130 audit(1696273365.917:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:45.929956 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:02:45.981032 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:02:45.981392 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:02:46.003275 kernel: audit: type=1130 audit(1696273365.983:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.017574 kernel: audit: type=1131 audit(1696273365.991:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:45.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:45.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:45.992630 systemd[1]: Reached target initrd-fs.target. Oct 2 19:02:46.001837 systemd[1]: Reached target initrd.target. Oct 2 19:02:46.004857 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:02:46.006322 systemd[1]: Starting dracut-pre-pivot.service... 
Oct 2 19:02:46.050641 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:02:46.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.055692 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:02:46.065049 kernel: audit: type=1130 audit(1696273366.050:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.085026 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:02:46.088592 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:02:46.092379 systemd[1]: Stopped target timers.target. Oct 2 19:02:46.095592 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:02:46.097770 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:02:46.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.101473 systemd[1]: Stopped target initrd.target. Oct 2 19:02:46.110120 kernel: audit: type=1131 audit(1696273366.099:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.111859 systemd[1]: Stopped target basic.target. Oct 2 19:02:46.115057 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:02:46.118755 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:02:46.122406 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:02:46.126162 systemd[1]: Stopped target remote-fs.target. Oct 2 19:02:46.129511 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:02:46.133055 systemd[1]: Stopped target sysinit.target. Oct 2 19:02:46.136358 systemd[1]: Stopped target local-fs.target. Oct 2 19:02:46.139647 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:02:46.143090 systemd[1]: Stopped target swap.target. Oct 2 19:02:46.146145 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:02:46.148320 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:02:46.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.151759 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:02:46.162477 kernel: audit: type=1131 audit(1696273366.150:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.162711 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:02:46.164729 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:02:46.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.168108 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:02:46.168331 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
Oct 2 19:02:46.176971 kernel: audit: type=1131 audit(1696273366.166:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.181509 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:02:46.183573 systemd[1]: Stopped ignition-files.service. Oct 2 19:02:46.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.188170 systemd[1]: Stopping ignition-mount.service... Oct 2 19:02:46.193938 systemd[1]: Stopping iscsiuio.service... Oct 2 19:02:46.198331 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:02:46.213547 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:02:46.215984 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:02:46.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.219665 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:02:46.221892 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:02:46.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.226709 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:02:46.251039 ignition[1387]: INFO : Ignition 2.14.0 Oct 2 19:02:46.251039 ignition[1387]: INFO : Stage: umount Oct 2 19:02:46.251039 ignition[1387]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:02:46.251039 ignition[1387]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:02:46.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.226983 systemd[1]: Stopped iscsiuio.service. Oct 2 19:02:46.267166 ignition[1387]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:02:46.269714 ignition[1387]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:02:46.272718 ignition[1387]: INFO : PUT result: OK Oct 2 19:02:46.277734 ignition[1387]: INFO : umount: umount passed Oct 2 19:02:46.279610 ignition[1387]: INFO : Ignition finished successfully Oct 2 19:02:46.283559 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:02:46.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.283745 systemd[1]: Finished initrd-cleanup.service. 
Oct 2 19:02:46.287715 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:02:46.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.287880 systemd[1]: Stopped ignition-mount.service. Oct 2 19:02:46.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.302743 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:02:46.302848 systemd[1]: Stopped ignition-disks.service. Oct 2 19:02:46.309630 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:02:46.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.309730 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:02:46.321033 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 2 19:02:46.321126 systemd[1]: Stopped ignition-fetch.service. Oct 2 19:02:46.322829 systemd[1]: Stopped target network.target. Oct 2 19:02:46.324448 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:02:46.324532 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:02:46.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.326429 systemd[1]: Stopped target paths.target. Oct 2 19:02:46.341544 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:02:46.343069 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:02:46.345039 systemd[1]: Stopped target slices.target. Oct 2 19:02:46.359291 systemd[1]: Stopped target sockets.target. Oct 2 19:02:46.362321 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:02:46.362720 systemd[1]: Closed iscsid.socket. Oct 2 19:02:46.363985 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:02:46.364059 systemd[1]: Closed iscsiuio.socket. Oct 2 19:02:46.368634 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:02:46.368720 systemd[1]: Stopped ignition-setup.service. Oct 2 19:02:46.370708 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:02:46.372327 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:02:46.374277 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Oct 2 19:02:46.374445 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:02:46.376181 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:02:46.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.376264 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:02:46.423000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:02:46.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.393593 systemd-networkd[1193]: eth0: DHCPv6 lease lost Oct 2 19:02:46.437000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:02:46.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.396869 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:02:46.397142 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:02:46.403876 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:02:46.404077 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:02:46.405406 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:02:46.405504 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:02:46.407296 systemd[1]: Stopping network-cleanup.service... Oct 2 19:02:46.424998 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:02:46.425115 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:02:46.427076 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:02:46.427168 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:02:46.429003 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:02:46.429082 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:02:46.431097 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:02:46.436892 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:02:46.437275 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:02:46.455280 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:02:46.455372 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:02:46.466230 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:02:46.466305 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:02:46.473714 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:02:46.475233 systemd[1]: Stopped dracut-pre-udev.service. 
Oct 2 19:02:46.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.493665 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:02:46.493747 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:02:46.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.497154 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:02:46.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.498602 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:02:46.505282 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:02:46.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.514063 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 19:02:46.514188 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 19:02:46.516439 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:02:46.516526 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:02:46.518350 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:02:46.518427 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:02:46.520885 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:02:46.521134 systemd[1]: Stopped network-cleanup.service. Oct 2 19:02:46.550612 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:02:46.551188 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:02:46.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:46.555798 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:02:46.559850 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:02:46.576111 systemd[1]: mnt-oem3612913678.mount: Deactivated successfully. Oct 2 19:02:46.576583 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Oct 2 19:02:46.576698 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:02:46.576803 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 19:02:46.592501 systemd[1]: Switching root. Oct 2 19:02:46.624977 systemd-journald[309]: Received SIGTERM from PID 1 (systemd). Oct 2 19:02:46.625057 iscsid[1198]: iscsid shutting down. Oct 2 19:02:46.626750 systemd-journald[309]: Journal stopped Oct 2 19:02:52.097632 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:02:52.098212 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 19:02:52.098313 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:02:52.098418 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:02:52.098451 kernel: SELinux: policy capability open_perms=1 Oct 2 19:02:52.098489 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:02:52.098523 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:02:52.098553 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:02:52.098583 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:02:52.098612 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:02:52.098644 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:02:52.098677 systemd[1]: Successfully loaded SELinux policy in 95.955ms. Oct 2 19:02:52.104375 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.699ms. Oct 2 19:02:52.104435 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:02:52.104475 systemd[1]: Detected virtualization amazon. Oct 2 19:02:52.104507 systemd[1]: Detected architecture arm64. Oct 2 19:02:52.104538 systemd[1]: Detected first boot. Oct 2 19:02:52.104570 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:02:52.104602 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:02:52.104635 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:02:52.104673 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:02:52.104713 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 2 19:02:52.104810 kernel: kauditd_printk_skb: 39 callbacks suppressed Oct 2 19:02:52.104845 kernel: audit: type=1334 audit(1696273371.618:85): prog-id=12 op=LOAD Oct 2 19:02:52.104877 kernel: audit: type=1334 audit(1696273371.621:86): prog-id=3 op=UNLOAD Oct 2 19:02:52.104927 kernel: audit: type=1334 audit(1696273371.623:87): prog-id=13 op=LOAD Oct 2 19:02:52.104963 kernel: audit: type=1334 audit(1696273371.626:88): prog-id=14 op=LOAD Oct 2 19:02:52.104994 kernel: audit: type=1334 audit(1696273371.626:89): prog-id=4 op=UNLOAD Oct 2 19:02:52.105025 kernel: audit: type=1334 audit(1696273371.626:90): prog-id=5 op=UNLOAD Oct 2 19:02:52.105060 kernel: audit: type=1334 audit(1696273371.628:91): prog-id=15 op=LOAD Oct 2 19:02:52.105090 kernel: audit: type=1334 audit(1696273371.628:92): prog-id=12 op=UNLOAD Oct 2 19:02:52.105117 kernel: audit: type=1334 audit(1696273371.631:93): prog-id=16 op=LOAD Oct 2 19:02:52.105149 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:02:52.105182 kernel: audit: type=1334 audit(1696273371.634:94): prog-id=17 op=LOAD Oct 2 19:02:52.105213 systemd[1]: Stopped iscsid.service. Oct 2 19:02:52.105243 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:02:52.105272 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:02:52.105302 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:02:52.105337 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:02:52.105367 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:02:52.105414 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Oct 2 19:02:52.105447 systemd[1]: Created slice system-getty.slice. Oct 2 19:02:52.105545 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:02:52.105577 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:02:52.105608 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:02:52.105645 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:02:52.105676 systemd[1]: Created slice user.slice. Oct 2 19:02:52.105710 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:02:52.105740 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:02:52.107712 systemd[1]: Set up automount boot.automount. Oct 2 19:02:52.107770 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:02:52.107803 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:02:52.107838 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:02:52.107867 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:02:52.107937 systemd[1]: Reached target integritysetup.target. Oct 2 19:02:52.107972 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:02:52.108005 systemd[1]: Reached target remote-fs.target. Oct 2 19:02:52.108034 systemd[1]: Reached target slices.target. Oct 2 19:02:52.108064 systemd[1]: Reached target swap.target. Oct 2 19:02:52.108093 systemd[1]: Reached target torcx.target. Oct 2 19:02:52.108123 systemd[1]: Reached target veritysetup.target. Oct 2 19:02:52.108154 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:02:52.108183 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:02:52.108215 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:02:52.108316 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:02:52.108349 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:02:52.108378 systemd[1]: Listening on systemd-userdbd.socket. 
Oct 2 19:02:52.108408 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:02:52.108444 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:02:52.108478 systemd[1]: Mounting media.mount... Oct 2 19:02:52.108513 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:02:52.108543 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:02:52.108573 systemd[1]: Mounting tmp.mount... Oct 2 19:02:52.108606 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:02:52.108639 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:02:52.108669 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:02:52.108701 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:02:52.108731 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:02:52.108760 systemd[1]: Starting modprobe@drm.service... Oct 2 19:02:52.108792 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:02:52.108821 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:02:52.108850 systemd[1]: Starting modprobe@loop.service... Oct 2 19:02:52.108885 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:02:52.108952 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:02:52.108984 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:02:52.109015 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:02:52.109045 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:02:52.109076 systemd[1]: Stopped systemd-journald.service. Oct 2 19:02:52.109107 systemd[1]: Starting systemd-journald.service... Oct 2 19:02:52.109138 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:02:52.109170 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:02:52.109206 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:02:52.109237 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:02:52.109268 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:02:52.109300 systemd[1]: Stopped verity-setup.service. Oct 2 19:02:52.109329 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:02:52.109359 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:02:52.109405 systemd[1]: Mounted media.mount. Oct 2 19:02:52.109442 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:02:52.109472 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:02:52.119769 systemd[1]: Mounted tmp.mount. Oct 2 19:02:52.119839 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:02:52.119871 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:02:52.119917 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:02:52.119951 kernel: loop: module loaded Oct 2 19:02:52.119982 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:02:52.120011 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:02:52.120041 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:02:52.120079 kernel: fuse: init (API version 7.34) Oct 2 19:02:52.120108 systemd[1]: Finished modprobe@drm.service. Oct 2 19:02:52.120138 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:02:52.120170 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:02:52.120199 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:02:52.120229 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:02:52.120262 systemd[1]: Finished modprobe@loop.service. 
Oct 2 19:02:52.120295 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:02:52.120325 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:02:52.120357 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:02:52.120390 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:02:52.120426 systemd[1]: Reached target network-pre.target. Oct 2 19:02:52.120463 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:02:52.120505 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:02:52.120541 systemd-journald[1495]: Journal started Oct 2 19:02:52.120650 systemd-journald[1495]: Runtime Journal (/run/log/journal/ec28059fc3721e2a31303b75302e5902) is 8.0M, max 75.4M, 67.4M free. Oct 2 19:02:47.269000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:02:47.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:02:47.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:02:47.409000 audit: BPF prog-id=10 op=LOAD Oct 2 19:02:47.409000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:02:47.410000 audit: BPF prog-id=11 op=LOAD Oct 2 19:02:47.410000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:02:51.618000 audit: BPF prog-id=12 op=LOAD Oct 2 19:02:51.621000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:02:51.623000 audit: BPF prog-id=13 op=LOAD Oct 2 19:02:51.626000 audit: BPF prog-id=14 op=LOAD Oct 2 19:02:51.626000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:02:51.626000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:02:51.628000 audit: BPF prog-id=15 op=LOAD Oct 2 19:02:51.628000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:02:51.631000 audit: BPF prog-id=16 op=LOAD Oct 2 19:02:51.634000 audit: BPF prog-id=17 op=LOAD Oct 2 19:02:51.634000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:02:51.634000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:02:51.636000 audit: BPF prog-id=18 op=LOAD Oct 2 19:02:51.636000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:02:51.639000 audit: BPF prog-id=19 op=LOAD Oct 2 19:02:51.641000 audit: BPF prog-id=20 op=LOAD Oct 2 19:02:51.641000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:02:51.641000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:02:51.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:51.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:51.655000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:02:51.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:51.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:02:51.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:51.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:51.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:51.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:51.947000 audit: BPF prog-id=21 op=LOAD Oct 2 19:02:51.947000 audit: BPF prog-id=22 op=LOAD Oct 2 19:02:51.947000 audit: BPF prog-id=23 op=LOAD Oct 2 19:02:51.947000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:02:51.947000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:02:51.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:52.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:52.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:52.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:52.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:52.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:52.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:52.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:52.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:02:52.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:52.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:52.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:52.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:52.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:52.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:52.089000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:02:52.089000 audit[1495]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc9ec8640 a2=4000 a3=1 items=0 ppid=1 pid=1495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:02:52.089000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:02:52.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:52.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:51.616970 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:02:47.605555 /usr/lib/systemd/system-generators/torcx-generator[1421]: time="2023-10-02T19:02:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:02:51.643732 systemd[1]: systemd-journald.service: Deactivated successfully. 
Oct 2 19:02:47.616106 /usr/lib/systemd/system-generators/torcx-generator[1421]: time="2023-10-02T19:02:47Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:02:47.616157 /usr/lib/systemd/system-generators/torcx-generator[1421]: time="2023-10-02T19:02:47Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:02:47.616223 /usr/lib/systemd/system-generators/torcx-generator[1421]: time="2023-10-02T19:02:47Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:02:47.616249 /usr/lib/systemd/system-generators/torcx-generator[1421]: time="2023-10-02T19:02:47Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:02:47.616314 /usr/lib/systemd/system-generators/torcx-generator[1421]: time="2023-10-02T19:02:47Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:02:47.616345 /usr/lib/systemd/system-generators/torcx-generator[1421]: time="2023-10-02T19:02:47Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:02:47.616745 /usr/lib/systemd/system-generators/torcx-generator[1421]: time="2023-10-02T19:02:47Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:02:47.616819 /usr/lib/systemd/system-generators/torcx-generator[1421]: time="2023-10-02T19:02:47Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:02:47.616855 /usr/lib/systemd/system-generators/torcx-generator[1421]: time="2023-10-02T19:02:47Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:02:47.617838 /usr/lib/systemd/system-generators/torcx-generator[1421]: time="2023-10-02T19:02:47Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:02:47.617944 /usr/lib/systemd/system-generators/torcx-generator[1421]: time="2023-10-02T19:02:47Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:02:47.617990 /usr/lib/systemd/system-generators/torcx-generator[1421]: time="2023-10-02T19:02:47Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:02:47.618030 /usr/lib/systemd/system-generators/torcx-generator[1421]: time="2023-10-02T19:02:47Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:02:47.618075 /usr/lib/systemd/system-generators/torcx-generator[1421]: time="2023-10-02T19:02:47Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:02:47.618113 /usr/lib/systemd/system-generators/torcx-generator[1421]: time="2023-10-02T19:02:47Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:02:50.744141 /usr/lib/systemd/system-generators/torcx-generator[1421]: time="2023-10-02T19:02:50Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:02:50.744672 /usr/lib/systemd/system-generators/torcx-generator[1421]: 
time="2023-10-02T19:02:50Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:02:50.744936 /usr/lib/systemd/system-generators/torcx-generator[1421]: time="2023-10-02T19:02:50Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:02:50.745394 /usr/lib/systemd/system-generators/torcx-generator[1421]: time="2023-10-02T19:02:50Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:02:50.745518 /usr/lib/systemd/system-generators/torcx-generator[1421]: time="2023-10-02T19:02:50Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:02:50.745657 /usr/lib/systemd/system-generators/torcx-generator[1421]: time="2023-10-02T19:02:50Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:02:52.142287 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:02:52.148815 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:02:52.148918 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:02:52.165684 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:02:52.165776 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:02:52.177102 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:02:52.184450 systemd[1]: Started systemd-journald.service. Oct 2 19:02:52.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:52.186341 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:02:52.189197 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:02:52.193597 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:02:52.240446 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:02:52.242626 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:02:52.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:52.273809 systemd-journald[1495]: Time spent on flushing to /var/log/journal/ec28059fc3721e2a31303b75302e5902 is 52.101ms for 1154 entries. Oct 2 19:02:52.273809 systemd-journald[1495]: System Journal (/var/log/journal/ec28059fc3721e2a31303b75302e5902) is 8.0M, max 195.6M, 187.6M free. Oct 2 19:02:52.335063 systemd-journald[1495]: Received client request to flush runtime journal. 
Oct 2 19:02:52.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:52.297317 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:02:52.339382 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:02:52.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:52.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:52.367157 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:02:52.372472 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:02:52.417715 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:02:52.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:52.421962 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:02:52.449436 udevadm[1539]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 2 19:02:52.534475 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:02:52.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:52.538774 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:02:52.655546 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:02:52.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:53.214736 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:02:53.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:53.216000 audit: BPF prog-id=24 op=LOAD Oct 2 19:02:53.216000 audit: BPF prog-id=25 op=LOAD Oct 2 19:02:53.216000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:02:53.217000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:02:53.219031 systemd[1]: Starting systemd-udevd.service... Oct 2 19:02:53.266516 systemd-udevd[1542]: Using default interface naming scheme 'v252'. Oct 2 19:02:53.299000 systemd[1]: Started systemd-udevd.service. Oct 2 19:02:53.303828 systemd[1]: Starting systemd-networkd.service... Oct 2 19:02:53.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:02:53.301000 audit: BPF prog-id=26 op=LOAD Oct 2 19:02:53.314000 audit: BPF prog-id=27 op=LOAD Oct 2 19:02:53.315000 audit: BPF prog-id=28 op=LOAD Oct 2 19:02:53.315000 audit: BPF prog-id=29 op=LOAD Oct 2 19:02:53.320076 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:02:53.446881 systemd[1]: Started systemd-userdbd.service. Oct 2 19:02:53.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:53.459144 (udev-worker)[1555]: Network interface NamePolicy= disabled on kernel command line. Oct 2 19:02:53.487421 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 19:02:53.645225 systemd-networkd[1547]: lo: Link UP Oct 2 19:02:53.645247 systemd-networkd[1547]: lo: Gained carrier Oct 2 19:02:53.646233 systemd-networkd[1547]: Enumeration completed Oct 2 19:02:53.646397 systemd[1]: Started systemd-networkd.service. Oct 2 19:02:53.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:53.650517 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:02:53.656192 systemd-networkd[1547]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:02:53.661936 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 2 19:02:53.662600 systemd-networkd[1547]: eth0: Link UP Oct 2 19:02:53.662895 systemd-networkd[1547]: eth0: Gained carrier Oct 2 19:02:53.678287 systemd-networkd[1547]: eth0: DHCPv4 address 172.31.18.218/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 2 19:02:53.786957 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1545) Oct 2 19:02:53.972042 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:02:53.974776 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:02:53.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:53.979705 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:02:54.014619 lvm[1661]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:02:54.054935 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:02:54.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:54.057097 systemd[1]: Reached target cryptsetup.target. Oct 2 19:02:54.061141 systemd[1]: Starting lvm2-activation.service... Oct 2 19:02:54.075839 lvm[1662]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:02:54.113972 systemd[1]: Finished lvm2-activation.service. Oct 2 19:02:54.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:54.116004 systemd[1]: Reached target local-fs-pre.target. 
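The DHCPv4 lease recorded above (172.31.18.218/20, gateway 172.31.16.1 acquired from 172.31.16.1) implies the instance sits in a 4096-address subnet spanning 172.31.16.0-172.31.31.255. A quick sketch, not part of the boot output, that verifies this using only the values shown in the systemd-networkd entry:

    # Derive the subnet implied by the lease "172.31.18.218/20" logged by systemd-networkd.
    import ipaddress

    iface = ipaddress.ip_interface("172.31.18.218/20")
    print(iface.network)                # 172.31.16.0/20
    print(iface.network.num_addresses)  # 4096 addresses
    print(ipaddress.ip_address("172.31.16.1") in iface.network)  # gateway inside the subnet -> True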
Oct 2 19:02:54.117786 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:02:54.117845 systemd[1]: Reached target local-fs.target. Oct 2 19:02:54.119707 systemd[1]: Reached target machines.target. Oct 2 19:02:54.123734 systemd[1]: Starting ldconfig.service... Oct 2 19:02:54.138490 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:02:54.138598 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:02:54.140804 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:02:54.146198 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:02:54.151912 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:02:54.153961 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:02:54.154077 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:02:54.157715 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:02:54.194527 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1664 (bootctl) Oct 2 19:02:54.196744 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:02:54.210690 systemd-tmpfiles[1667]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:02:54.219302 systemd-tmpfiles[1667]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:02:54.226093 systemd-tmpfiles[1667]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:02:54.232294 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:02:54.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:54.335465 systemd-fsck[1673]: fsck.fat 4.2 (2021-01-31) Oct 2 19:02:54.335465 systemd-fsck[1673]: /dev/nvme0n1p1: 236 files, 113463/258078 clusters Oct 2 19:02:54.343649 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:02:54.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:54.348594 systemd[1]: Mounting boot.mount... Oct 2 19:02:54.381027 systemd[1]: Mounted boot.mount. Oct 2 19:02:54.407145 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:02:54.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:54.587837 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:02:54.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:54.592321 systemd[1]: Starting audit-rules.service... 
Oct 2 19:02:54.601265 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:02:54.605414 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:02:54.608000 audit: BPF prog-id=30 op=LOAD Oct 2 19:02:54.614000 audit: BPF prog-id=31 op=LOAD Oct 2 19:02:54.611372 systemd[1]: Starting systemd-resolved.service... Oct 2 19:02:54.619273 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:02:54.623162 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:02:54.654316 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:02:54.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:54.656457 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:02:54.676000 audit[1692]: SYSTEM_BOOT pid=1692 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:02:54.691951 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:02:54.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:54.723159 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:02:54.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:54.821464 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:02:54.823611 systemd[1]: Reached target time-set.target. Oct 2 19:02:54.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:02:54.839798 systemd-resolved[1690]: Positive Trust Anchors: Oct 2 19:02:54.840395 systemd-resolved[1690]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:02:54.840548 systemd-resolved[1690]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:02:54.845000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:02:54.845000 audit[1708]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcd592a20 a2=420 a3=0 items=0 ppid=1687 pid=1708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:02:54.845000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:02:54.846707 augenrules[1708]: No rules Oct 2 19:02:54.848396 systemd[1]: Finished audit-rules.service. Oct 2 19:02:54.891507 systemd-resolved[1690]: Defaulting to hostname 'linux'. Oct 2 19:02:54.895261 systemd[1]: Started systemd-resolved.service. Oct 2 19:02:54.897186 systemd[1]: Reached target network.target. Oct 2 19:02:54.898880 systemd[1]: Reached target nss-lookup.target. Oct 2 19:02:54.918051 systemd-timesyncd[1691]: Contacted time server 45.55.58.103:123 (0.flatcar.pool.ntp.org). Oct 2 19:02:54.918752 systemd-timesyncd[1691]: Initial clock synchronization to Mon 2023-10-02 19:02:54.591942 UTC. Oct 2 19:02:55.376313 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:02:55.379367 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:02:55.415851 ldconfig[1663]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:02:55.424172 systemd[1]: Finished ldconfig.service. Oct 2 19:02:55.428033 systemd[1]: Starting systemd-update-done.service... Oct 2 19:02:55.449464 systemd[1]: Finished systemd-update-done.service. Oct 2 19:02:55.451606 systemd[1]: Reached target sysinit.target. Oct 2 19:02:55.453439 systemd[1]: Started motdgen.path. Oct 2 19:02:55.455045 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:02:55.457618 systemd[1]: Started logrotate.timer. Oct 2 19:02:55.459488 systemd[1]: Started mdadm.timer. Oct 2 19:02:55.461391 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:02:55.463178 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:02:55.463358 systemd[1]: Reached target paths.target. Oct 2 19:02:55.465003 systemd[1]: Reached target timers.target. Oct 2 19:02:55.467584 systemd[1]: Listening on dbus.socket. Oct 2 19:02:55.471275 systemd[1]: Starting docker.socket... Oct 2 19:02:55.479517 systemd[1]: Listening on sshd.socket. Oct 2 19:02:55.481531 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:02:55.482493 systemd[1]: Listening on docker.socket. Oct 2 19:02:55.484383 systemd[1]: Reached target sockets.target. 
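The audit SYSCALL records in this section carry arch=c00000b7, which is the kernel's AUDIT_ARCH_AARCH64 value (ELF machine EM_AARCH64 with the 64-bit and little-endian flag bits set); on the arm64 generic syscall table, syscall 211 is sendmsg and 206 is sendto, i.e. the netlink traffic from systemd-journald and auditctl seen above. A small sketch, not part of the log, that decodes the constant (values taken from the kernel's audit UAPI header):

    # Decode the audit "arch" field seen in the SYSCALL records above.
    AUDIT_ARCH_64BIT = 0x80000000  # __AUDIT_ARCH_64BIT
    AUDIT_ARCH_LE    = 0x40000000  # __AUDIT_ARCH_LE
    EM_AARCH64       = 183         # 0xB7

    def decode_audit_arch(value: int) -> dict:
        """Split an audit arch value into its ELF machine number and flag bits."""
        return {
            "machine": value & 0x00FFFFFF,
            "is_64bit": bool(value & AUDIT_ARCH_64BIT),
            "little_endian": bool(value & AUDIT_ARCH_LE),
        }

    print(decode_audit_arch(0xC00000B7))
    # {'machine': 183, 'is_64bit': True, 'little_endian': True} -> AUDIT_ARCH_AARCH64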
Oct 2 19:02:55.486476 systemd[1]: Reached target basic.target. Oct 2 19:02:55.488566 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:02:55.488755 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:02:55.500394 systemd[1]: Starting containerd.service... Oct 2 19:02:55.507711 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Oct 2 19:02:55.511673 systemd[1]: Starting dbus.service... Oct 2 19:02:55.515088 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:02:55.519136 systemd[1]: Starting extend-filesystems.service... Oct 2 19:02:55.521442 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:02:55.524993 systemd[1]: Starting motdgen.service... Oct 2 19:02:55.530129 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:02:55.533841 systemd[1]: Starting prepare-critools.service... Oct 2 19:02:55.537757 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:02:55.542119 systemd[1]: Starting sshd-keygen.service... Oct 2 19:02:55.552972 systemd[1]: Starting systemd-logind.service... Oct 2 19:02:55.554534 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:02:55.554661 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:02:55.555502 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:02:55.556931 systemd[1]: Starting update-engine.service... Oct 2 19:02:55.560677 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:02:55.566122 systemd-networkd[1547]: eth0: Gained IPv6LL Oct 2 19:02:55.573223 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:02:55.576940 systemd[1]: Reached target network-online.target. Oct 2 19:02:55.581742 systemd[1]: Started amazon-ssm-agent.service. Oct 2 19:02:55.585870 systemd[1]: Started nvidia.service. Oct 2 19:02:55.649238 jq[1722]: false Oct 2 19:02:55.678012 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:02:55.678332 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:02:55.681979 tar[1733]: ./ Oct 2 19:02:55.681979 tar[1733]: ./macvlan Oct 2 19:02:55.707980 jq[1731]: true Oct 2 19:02:55.732807 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:02:55.733203 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:02:55.794462 dbus-daemon[1721]: [system] SELinux support is enabled Oct 2 19:02:55.795420 systemd[1]: Started dbus.service. Oct 2 19:02:55.800166 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:02:55.804222 tar[1736]: crictl Oct 2 19:02:55.800214 systemd[1]: Reached target system-config.target. Oct 2 19:02:55.802121 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:02:55.802154 systemd[1]: Reached target user-config.target. 
Oct 2 19:02:55.807441 jq[1748]: true Oct 2 19:02:55.835986 dbus-daemon[1721]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1547 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 2 19:02:55.843479 systemd[1]: Starting systemd-hostnamed.service... Oct 2 19:02:55.933691 update_engine[1730]: I1002 19:02:55.932845 1730 main.cc:92] Flatcar Update Engine starting Oct 2 19:02:55.939314 extend-filesystems[1723]: Found nvme0n1 Oct 2 19:02:55.941264 extend-filesystems[1723]: Found nvme0n1p1 Oct 2 19:02:55.941264 extend-filesystems[1723]: Found nvme0n1p2 Oct 2 19:02:55.941264 extend-filesystems[1723]: Found nvme0n1p3 Oct 2 19:02:55.941264 extend-filesystems[1723]: Found usr Oct 2 19:02:55.941264 extend-filesystems[1723]: Found nvme0n1p4 Oct 2 19:02:55.941264 extend-filesystems[1723]: Found nvme0n1p6 Oct 2 19:02:55.941264 extend-filesystems[1723]: Found nvme0n1p7 Oct 2 19:02:55.941264 extend-filesystems[1723]: Found nvme0n1p9 Oct 2 19:02:55.941264 extend-filesystems[1723]: Checking size of /dev/nvme0n1p9 Oct 2 19:02:55.975661 systemd[1]: Started update-engine.service. Oct 2 19:02:55.980461 systemd[1]: Started locksmithd.service. Oct 2 19:02:55.983483 update_engine[1730]: I1002 19:02:55.983445 1730 update_check_scheduler.cc:74] Next update check in 7m40s Oct 2 19:02:56.060591 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:02:56.061019 systemd[1]: Finished motdgen.service. Oct 2 19:02:56.100686 extend-filesystems[1723]: Resized partition /dev/nvme0n1p9 Oct 2 19:02:56.126084 amazon-ssm-agent[1734]: 2023/10/02 19:02:56 Failed to load instance info from vault. RegistrationKey does not exist. Oct 2 19:02:56.141482 extend-filesystems[1789]: resize2fs 1.46.5 (30-Dec-2021) Oct 2 19:02:56.149878 amazon-ssm-agent[1734]: Initializing new seelog logger Oct 2 19:02:56.149878 amazon-ssm-agent[1734]: New Seelog Logger Creation Complete Oct 2 19:02:56.149878 amazon-ssm-agent[1734]: 2023/10/02 19:02:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 2 19:02:56.149878 amazon-ssm-agent[1734]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 2 19:02:56.149878 amazon-ssm-agent[1734]: 2023/10/02 19:02:56 processing appconfig overrides Oct 2 19:02:56.167932 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Oct 2 19:02:56.190231 tar[1733]: ./static Oct 2 19:02:56.196092 systemd-logind[1729]: Watching system buttons on /dev/input/event0 (Power Button) Oct 2 19:02:56.205365 systemd-logind[1729]: New seat seat0. Oct 2 19:02:56.211829 systemd[1]: Started systemd-logind.service. Oct 2 19:02:56.223935 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Oct 2 19:02:56.267023 extend-filesystems[1789]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Oct 2 19:02:56.267023 extend-filesystems[1789]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 2 19:02:56.267023 extend-filesystems[1789]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Oct 2 19:02:56.285165 extend-filesystems[1723]: Resized filesystem in /dev/nvme0n1p9 Oct 2 19:02:56.281637 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:02:56.287982 bash[1795]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:02:56.281990 systemd[1]: Finished extend-filesystems.service. Oct 2 19:02:56.289843 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
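The extend-filesystems/resize2fs output above grows the root filesystem on /dev/nvme0n1p9 from 553472 to 1489915 blocks; with the 4 KiB block size it reports ("(4k) blocks"), that is roughly 2.1 GiB before and 5.7 GiB after the online resize. A one-off check of that arithmetic, not part of the boot output:

    # Convert the resize2fs block counts logged above into sizes (4096-byte blocks).
    BLOCK_SIZE = 4096
    before = 553472 * BLOCK_SIZE
    after = 1489915 * BLOCK_SIZE
    print(f"{before / 2**30:.2f} GiB -> {after / 2**30:.2f} GiB")  # 2.11 GiB -> 5.68 GiB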
Oct 2 19:02:56.361802 systemd[1]: nvidia.service: Deactivated successfully. Oct 2 19:02:56.404805 env[1745]: time="2023-10-02T19:02:56.403184692Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:02:56.451194 tar[1733]: ./vlan Oct 2 19:02:56.558677 dbus-daemon[1721]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 2 19:02:56.558913 systemd[1]: Started systemd-hostnamed.service. Oct 2 19:02:56.562984 dbus-daemon[1721]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1757 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 2 19:02:56.567456 systemd[1]: Starting polkit.service... Oct 2 19:02:56.583359 env[1745]: time="2023-10-02T19:02:56.583280686Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:02:56.583568 env[1745]: time="2023-10-02T19:02:56.583525335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:02:56.590314 env[1745]: time="2023-10-02T19:02:56.590196648Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:02:56.590314 env[1745]: time="2023-10-02T19:02:56.590262355Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:02:56.590717 env[1745]: time="2023-10-02T19:02:56.590652144Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:02:56.590717 env[1745]: time="2023-10-02T19:02:56.590700597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 19:02:56.590861 env[1745]: time="2023-10-02T19:02:56.590733763Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:02:56.590861 env[1745]: time="2023-10-02T19:02:56.590758134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:02:56.591008 env[1745]: time="2023-10-02T19:02:56.590951125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:02:56.591506 env[1745]: time="2023-10-02T19:02:56.591456416Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:02:56.591810 env[1745]: time="2023-10-02T19:02:56.591723399Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:02:56.591810 env[1745]: time="2023-10-02T19:02:56.591772709Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Oct 2 19:02:56.591955 env[1745]: time="2023-10-02T19:02:56.591912432Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:02:56.591955 env[1745]: time="2023-10-02T19:02:56.591939580Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:02:56.608924 env[1745]: time="2023-10-02T19:02:56.606958182Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:02:56.608924 env[1745]: time="2023-10-02T19:02:56.607028298Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:02:56.608924 env[1745]: time="2023-10-02T19:02:56.607058687Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:02:56.608924 env[1745]: time="2023-10-02T19:02:56.607137992Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:02:56.608924 env[1745]: time="2023-10-02T19:02:56.607175497Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:02:56.608924 env[1745]: time="2023-10-02T19:02:56.607206997Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:02:56.608924 env[1745]: time="2023-10-02T19:02:56.607238809Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:02:56.608924 env[1745]: time="2023-10-02T19:02:56.607748949Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:02:56.608924 env[1745]: time="2023-10-02T19:02:56.607824458Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:02:56.608924 env[1745]: time="2023-10-02T19:02:56.607858943Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:02:56.608924 env[1745]: time="2023-10-02T19:02:56.607921723Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:02:56.608924 env[1745]: time="2023-10-02T19:02:56.607954889Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:02:56.608924 env[1745]: time="2023-10-02T19:02:56.608183024Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:02:56.608924 env[1745]: time="2023-10-02T19:02:56.608348623Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:02:56.609669 env[1745]: time="2023-10-02T19:02:56.608844923Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:02:56.610009 env[1745]: time="2023-10-02T19:02:56.609955952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:02:56.610302 env[1745]: time="2023-10-02T19:02:56.610248776Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:02:56.610710 env[1745]: time="2023-10-02T19:02:56.610670955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Oct 2 19:02:56.610909 env[1745]: time="2023-10-02T19:02:56.610862199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:02:56.611041 env[1745]: time="2023-10-02T19:02:56.611011400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:02:56.611185 env[1745]: time="2023-10-02T19:02:56.611156030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:02:56.611324 env[1745]: time="2023-10-02T19:02:56.611296552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:02:56.611478 env[1745]: time="2023-10-02T19:02:56.611448819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:02:56.611611 env[1745]: time="2023-10-02T19:02:56.611583532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:02:56.611747 env[1745]: time="2023-10-02T19:02:56.611719529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:02:56.611943 env[1745]: time="2023-10-02T19:02:56.611888889Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:02:56.612935 env[1745]: time="2023-10-02T19:02:56.612873862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:02:56.617264 env[1745]: time="2023-10-02T19:02:56.617192107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:02:56.617501 env[1745]: time="2023-10-02T19:02:56.617469783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:02:56.618269 env[1745]: time="2023-10-02T19:02:56.618210419Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:02:56.619005 env[1745]: time="2023-10-02T19:02:56.618940004Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:02:56.619174 env[1745]: time="2023-10-02T19:02:56.619145620Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:02:56.619520 env[1745]: time="2023-10-02T19:02:56.619487267Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:02:56.620688 env[1745]: time="2023-10-02T19:02:56.620651332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 2 19:02:56.621476 env[1745]: time="2023-10-02T19:02:56.621341259Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:02:56.627965 env[1745]: time="2023-10-02T19:02:56.627839867Z" level=info msg="Connect containerd service" Oct 2 19:02:56.630381 env[1745]: time="2023-10-02T19:02:56.630318136Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:02:56.632101 env[1745]: time="2023-10-02T19:02:56.632032681Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:02:56.639421 env[1745]: time="2023-10-02T19:02:56.639356334Z" level=info msg="Start subscribing containerd event" Oct 2 19:02:56.643052 env[1745]: time="2023-10-02T19:02:56.642992881Z" level=info msg="Start recovering state" Oct 2 19:02:56.643136 polkitd[1813]: Started polkitd version 121 Oct 2 19:02:56.643850 env[1745]: time="2023-10-02T19:02:56.643814546Z" level=info msg="Start event monitor" Oct 2 19:02:56.644148 env[1745]: time="2023-10-02T19:02:56.644118549Z" level=info msg="Start snapshots syncer" Oct 2 19:02:56.646009 env[1745]: time="2023-10-02T19:02:56.645957913Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:02:56.646983 env[1745]: time="2023-10-02T19:02:56.646949458Z" level=info msg="Start streaming server" Oct 2 19:02:56.648818 env[1745]: 
time="2023-10-02T19:02:56.648772123Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:02:56.649646 env[1745]: time="2023-10-02T19:02:56.649590699Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:02:56.650357 systemd[1]: Started containerd.service. Oct 2 19:02:56.654254 env[1745]: time="2023-10-02T19:02:56.654005260Z" level=info msg="containerd successfully booted in 0.310741s" Oct 2 19:02:56.696399 polkitd[1813]: Loading rules from directory /etc/polkit-1/rules.d Oct 2 19:02:56.696519 polkitd[1813]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 2 19:02:56.712506 polkitd[1813]: Finished loading, compiling and executing 2 rules Oct 2 19:02:56.713300 dbus-daemon[1721]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 2 19:02:56.713557 systemd[1]: Started polkit.service. Oct 2 19:02:56.716936 polkitd[1813]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 2 19:02:56.741975 tar[1733]: ./portmap Oct 2 19:02:56.769255 systemd-hostnamed[1757]: Hostname set to (transient) Oct 2 19:02:56.769422 systemd-resolved[1690]: System hostname changed to 'ip-172-31-18-218'. Oct 2 19:02:56.872577 coreos-metadata[1720]: Oct 02 19:02:56.872 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 2 19:02:56.873893 coreos-metadata[1720]: Oct 02 19:02:56.873 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Oct 2 19:02:56.874544 coreos-metadata[1720]: Oct 02 19:02:56.874 INFO Fetch successful Oct 2 19:02:56.875408 coreos-metadata[1720]: Oct 02 19:02:56.874 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 2 19:02:56.875923 coreos-metadata[1720]: Oct 02 19:02:56.875 INFO Fetch successful Oct 2 19:02:56.876212 tar[1733]: ./host-local Oct 2 19:02:56.878023 unknown[1720]: wrote ssh authorized keys file for user: core Oct 2 19:02:56.907478 update-ssh-keys[1851]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:02:56.908658 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Oct 2 19:02:57.013639 tar[1733]: ./vrf Oct 2 19:02:57.026152 amazon-ssm-agent[1734]: 2023-10-02 19:02:57 INFO Entering SSM Agent hibernate - AccessDeniedException: User: arn:aws:sts::075585003325:assumed-role/jenkins-test/i-0cdae4af3ad1b87c2 is not authorized to perform: ssm:UpdateInstanceInformation on resource: arn:aws:ec2:us-west-2:075585003325:instance/i-0cdae4af3ad1b87c2 because no identity-based policy allows the ssm:UpdateInstanceInformation action Oct 2 19:02:57.026152 amazon-ssm-agent[1734]: status code: 400, request id: 696608ab-4051-4762-84ca-8520f79c17f7 Oct 2 19:02:57.026152 amazon-ssm-agent[1734]: 2023-10-02 19:02:57 INFO Agent is in hibernate mode. Reducing logging. Logging will be reduced to one log per backoff period Oct 2 19:02:57.115061 tar[1733]: ./bridge Oct 2 19:02:57.161716 systemd[1]: Finished prepare-critools.service. Oct 2 19:02:57.239447 tar[1733]: ./tuning Oct 2 19:02:57.345033 tar[1733]: ./firewall Oct 2 19:02:57.472252 tar[1733]: ./host-device Oct 2 19:02:57.530632 tar[1733]: ./sbr Oct 2 19:02:57.583035 tar[1733]: ./loopback Oct 2 19:02:57.632307 tar[1733]: ./dhcp Oct 2 19:02:57.771255 tar[1733]: ./ptp Oct 2 19:02:57.830935 tar[1733]: ./ipvlan Oct 2 19:02:57.890431 tar[1733]: ./bandwidth Oct 2 19:02:57.971967 systemd[1]: Finished prepare-cni-plugins.service. 
Oct 2 19:02:58.065081 locksmithd[1770]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:02:58.708686 sshd_keygen[1756]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:02:58.765018 systemd[1]: Finished sshd-keygen.service. Oct 2 19:02:58.769967 systemd[1]: Starting issuegen.service... Oct 2 19:02:58.788355 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:02:58.788700 systemd[1]: Finished issuegen.service. Oct 2 19:02:58.793116 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:02:58.815176 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:02:58.820296 systemd[1]: Started getty@tty1.service. Oct 2 19:02:58.824599 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 19:02:58.826817 systemd[1]: Reached target getty.target. Oct 2 19:02:58.828652 systemd[1]: Reached target multi-user.target. Oct 2 19:02:58.833064 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:02:58.857453 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:02:58.857824 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:02:58.860121 systemd[1]: Startup finished in 1.186s (kernel) + 19.524s (initrd) + 11.734s (userspace) = 32.445s. Oct 2 19:03:05.080821 systemd[1]: Created slice system-sshd.slice. Oct 2 19:03:05.083195 systemd[1]: Started sshd@0-172.31.18.218:22-139.178.89.65:43876.service. Oct 2 19:03:05.268034 sshd[1930]: Accepted publickey for core from 139.178.89.65 port 43876 ssh2: RSA SHA256:oIXw9t+Qat2niYWbP5cZ6aL7amVj0yI65lfudwVnqrM Oct 2 19:03:05.273551 sshd[1930]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:03:05.291020 systemd[1]: Created slice user-500.slice. Oct 2 19:03:05.293850 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:03:05.299066 systemd-logind[1729]: New session 1 of user core. Oct 2 19:03:05.321846 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:03:05.325231 systemd[1]: Starting user@500.service... Oct 2 19:03:05.336221 (systemd)[1933]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:03:05.541685 systemd[1933]: Queued start job for default target default.target. Oct 2 19:03:05.542734 systemd[1933]: Reached target paths.target. Oct 2 19:03:05.542786 systemd[1933]: Reached target sockets.target. Oct 2 19:03:05.542820 systemd[1933]: Reached target timers.target. Oct 2 19:03:05.542850 systemd[1933]: Reached target basic.target. Oct 2 19:03:05.542977 systemd[1933]: Reached target default.target. Oct 2 19:03:05.543042 systemd[1933]: Startup finished in 189ms. Oct 2 19:03:05.543970 systemd[1]: Started user@500.service. Oct 2 19:03:05.545801 systemd[1]: Started session-1.scope. Oct 2 19:03:05.706097 systemd[1]: Started sshd@1-172.31.18.218:22-139.178.89.65:43882.service. Oct 2 19:03:05.888445 sshd[1942]: Accepted publickey for core from 139.178.89.65 port 43882 ssh2: RSA SHA256:oIXw9t+Qat2niYWbP5cZ6aL7amVj0yI65lfudwVnqrM Oct 2 19:03:05.892208 sshd[1942]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:03:05.900675 systemd-logind[1729]: New session 2 of user core. Oct 2 19:03:05.901607 systemd[1]: Started session-2.scope. Oct 2 19:03:06.051010 sshd[1942]: pam_unix(sshd:session): session closed for user core Oct 2 19:03:06.057123 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:03:06.058204 systemd[1]: sshd@1-172.31.18.218:22-139.178.89.65:43882.service: Deactivated successfully. 
Oct 2 19:03:06.060315 systemd-logind[1729]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:03:06.062416 systemd-logind[1729]: Removed session 2. Oct 2 19:03:06.081803 systemd[1]: Started sshd@2-172.31.18.218:22-139.178.89.65:53664.service. Oct 2 19:03:06.261845 sshd[1948]: Accepted publickey for core from 139.178.89.65 port 53664 ssh2: RSA SHA256:oIXw9t+Qat2niYWbP5cZ6aL7amVj0yI65lfudwVnqrM Oct 2 19:03:06.265086 sshd[1948]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:03:06.274051 systemd[1]: Started session-3.scope. Oct 2 19:03:06.274791 systemd-logind[1729]: New session 3 of user core. Oct 2 19:03:06.405070 sshd[1948]: pam_unix(sshd:session): session closed for user core Oct 2 19:03:06.411512 systemd-logind[1729]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:03:06.412124 systemd[1]: sshd@2-172.31.18.218:22-139.178.89.65:53664.service: Deactivated successfully. Oct 2 19:03:06.413289 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 19:03:06.414832 systemd-logind[1729]: Removed session 3. Oct 2 19:03:06.436476 systemd[1]: Started sshd@3-172.31.18.218:22-139.178.89.65:53680.service. Oct 2 19:03:06.621077 sshd[1954]: Accepted publickey for core from 139.178.89.65 port 53680 ssh2: RSA SHA256:oIXw9t+Qat2niYWbP5cZ6aL7amVj0yI65lfudwVnqrM Oct 2 19:03:06.624235 sshd[1954]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:03:06.633162 systemd-logind[1729]: New session 4 of user core. Oct 2 19:03:06.633222 systemd[1]: Started session-4.scope. Oct 2 19:03:06.780419 sshd[1954]: pam_unix(sshd:session): session closed for user core Oct 2 19:03:06.785589 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:03:06.786624 systemd[1]: sshd@3-172.31.18.218:22-139.178.89.65:53680.service: Deactivated successfully. Oct 2 19:03:06.788270 systemd-logind[1729]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:03:06.790134 systemd-logind[1729]: Removed session 4. Oct 2 19:03:06.808769 systemd[1]: Started sshd@4-172.31.18.218:22-139.178.89.65:53696.service. Oct 2 19:03:06.986080 sshd[1960]: Accepted publickey for core from 139.178.89.65 port 53696 ssh2: RSA SHA256:oIXw9t+Qat2niYWbP5cZ6aL7amVj0yI65lfudwVnqrM Oct 2 19:03:06.989237 sshd[1960]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:03:06.997437 systemd-logind[1729]: New session 5 of user core. Oct 2 19:03:06.998294 systemd[1]: Started session-5.scope. Oct 2 19:03:07.134791 sudo[1963]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:03:07.135314 sudo[1963]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:03:07.155167 dbus-daemon[1721]: avc: received setenforce notice (enforcing=1) Oct 2 19:03:07.157872 sudo[1963]: pam_unix(sudo:session): session closed for user root Oct 2 19:03:07.181560 sshd[1960]: pam_unix(sshd:session): session closed for user core Oct 2 19:03:07.188419 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:03:07.189499 systemd[1]: sshd@4-172.31.18.218:22-139.178.89.65:53696.service: Deactivated successfully. Oct 2 19:03:07.191232 systemd-logind[1729]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:03:07.193001 systemd-logind[1729]: Removed session 5. Oct 2 19:03:07.214614 systemd[1]: Started sshd@5-172.31.18.218:22-139.178.89.65:53700.service. 
Oct 2 19:03:07.396109 sshd[1967]: Accepted publickey for core from 139.178.89.65 port 53700 ssh2: RSA SHA256:oIXw9t+Qat2niYWbP5cZ6aL7amVj0yI65lfudwVnqrM Oct 2 19:03:07.398800 sshd[1967]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:03:07.407860 systemd[1]: Started session-6.scope. Oct 2 19:03:07.409029 systemd-logind[1729]: New session 6 of user core. Oct 2 19:03:07.528198 sudo[1971]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:03:07.528829 sudo[1971]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:03:07.535771 sudo[1971]: pam_unix(sudo:session): session closed for user root Oct 2 19:03:07.549180 sudo[1970]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:03:07.550166 sudo[1970]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:03:07.573075 systemd[1]: Stopping audit-rules.service... Oct 2 19:03:07.577000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:03:07.580532 kernel: kauditd_printk_skb: 79 callbacks suppressed Oct 2 19:03:07.580610 kernel: audit: type=1305 audit(1696273387.577:170): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:03:07.580837 auditctl[1974]: No rules Oct 2 19:03:07.586311 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:03:07.577000 audit[1974]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc62489e0 a2=420 a3=0 items=0 ppid=1 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:07.597695 kernel: audit: type=1300 audit(1696273387.577:170): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc62489e0 a2=420 a3=0 items=0 ppid=1 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:07.586664 systemd[1]: Stopped audit-rules.service. Oct 2 19:03:07.589684 systemd[1]: Starting audit-rules.service... Oct 2 19:03:07.577000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:03:07.602489 kernel: audit: type=1327 audit(1696273387.577:170): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:03:07.602581 kernel: audit: type=1131 audit(1696273387.585:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:03:07.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:03:07.653920 augenrules[1991]: No rules Oct 2 19:03:07.655759 systemd[1]: Finished audit-rules.service. Oct 2 19:03:07.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:03:07.666010 sudo[1970]: pam_unix(sudo:session): session closed for user root Oct 2 19:03:07.665000 audit[1970]: USER_END pid=1970 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:03:07.675428 kernel: audit: type=1130 audit(1696273387.655:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:03:07.675529 kernel: audit: type=1106 audit(1696273387.665:173): pid=1970 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:03:07.665000 audit[1970]: CRED_DISP pid=1970 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:03:07.684932 kernel: audit: type=1104 audit(1696273387.665:174): pid=1970 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:03:07.689506 sshd[1967]: pam_unix(sshd:session): session closed for user core Oct 2 19:03:07.690000 audit[1967]: USER_END pid=1967 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:03:07.695581 systemd-logind[1729]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:03:07.697070 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:03:07.698117 systemd[1]: sshd@5-172.31.18.218:22-139.178.89.65:53700.service: Deactivated successfully. Oct 2 19:03:07.700575 systemd-logind[1729]: Removed session 6. Oct 2 19:03:07.690000 audit[1967]: CRED_DISP pid=1967 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:03:07.714008 kernel: audit: type=1106 audit(1696273387.690:175): pid=1967 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:03:07.714109 kernel: audit: type=1104 audit(1696273387.690:176): pid=1967 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:03:07.714152 kernel: audit: type=1131 audit(1696273387.696:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.18.218:22-139.178.89.65:53700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:03:07.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.18.218:22-139.178.89.65:53700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:03:07.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.18.218:22-139.178.89.65:53714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:03:07.721156 systemd[1]: Started sshd@6-172.31.18.218:22-139.178.89.65:53714.service. Oct 2 19:03:07.903000 audit[1997]: USER_ACCT pid=1997 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:03:07.904463 sshd[1997]: Accepted publickey for core from 139.178.89.65 port 53714 ssh2: RSA SHA256:oIXw9t+Qat2niYWbP5cZ6aL7amVj0yI65lfudwVnqrM Oct 2 19:03:07.906000 audit[1997]: CRED_ACQ pid=1997 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:03:07.906000 audit[1997]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff6928f40 a2=3 a3=1 items=0 ppid=1 pid=1997 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:07.906000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:03:07.908808 sshd[1997]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:03:07.917782 systemd[1]: Started session-7.scope. Oct 2 19:03:07.918685 systemd-logind[1729]: New session 7 of user core. Oct 2 19:03:07.926000 audit[1997]: USER_START pid=1997 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:03:07.933000 audit[1999]: CRED_ACQ pid=1999 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:03:08.037000 audit[2000]: USER_ACCT pid=2000 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:03:08.038395 sudo[2000]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:03:08.038000 audit[2000]: CRED_REFR pid=2000 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:03:08.039442 sudo[2000]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:03:08.042000 audit[2000]: USER_START pid=2000 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:03:08.714752 systemd[1]: Reloading. Oct 2 19:03:08.895254 /usr/lib/systemd/system-generators/torcx-generator[2030]: time="2023-10-02T19:03:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:03:08.896457 /usr/lib/systemd/system-generators/torcx-generator[2030]: time="2023-10-02T19:03:08Z" level=info msg="torcx already run" Oct 2 19:03:09.128439 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:03:09.128690 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:03:09.171087 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:03:09.320000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.320000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.320000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.320000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.320000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.320000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.320000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.320000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.321000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.321000 audit: BPF prog-id=40 op=LOAD Oct 2 19:03:09.321000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.321000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.321000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.321000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.321000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.321000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.321000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.322000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.322000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.322000 audit: BPF prog-id=41 op=LOAD Oct 2 19:03:09.322000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:03:09.322000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:03:09.325000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.325000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.325000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.325000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.325000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.325000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.325000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.325000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.325000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.326000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.326000 audit: BPF prog-id=42 op=LOAD Oct 2 19:03:09.326000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:03:09.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.327000 audit: BPF prog-id=43 op=LOAD Oct 2 19:03:09.328000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.328000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.328000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.328000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.328000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:03:09.328000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.328000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.328000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.328000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.328000 audit: BPF prog-id=44 op=LOAD Oct 2 19:03:09.329000 audit: BPF prog-id=33 op=UNLOAD Oct 2 19:03:09.329000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:03:09.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.331000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.331000 audit: BPF prog-id=45 op=LOAD Oct 2 19:03:09.331000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:03:09.331000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.331000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:03:09.331000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.331000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.332000 audit: BPF prog-id=46 op=LOAD Oct 2 19:03:09.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.332000 audit: BPF prog-id=47 op=LOAD Oct 2 19:03:09.332000 audit: BPF prog-id=36 op=UNLOAD Oct 2 19:03:09.332000 audit: BPF prog-id=37 op=UNLOAD Oct 2 
19:03:09.336000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.336000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.336000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.336000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.336000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.336000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.336000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.336000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.336000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.336000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.336000 audit: BPF prog-id=48 op=LOAD Oct 2 19:03:09.336000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:03:09.337000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.337000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.337000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.337000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.337000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.337000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.337000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.337000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.337000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.337000 audit: BPF prog-id=49 op=LOAD Oct 2 19:03:09.337000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.337000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.337000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.337000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.337000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.337000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.337000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.337000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.337000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.337000 audit: BPF prog-id=50 op=LOAD Oct 2 19:03:09.337000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:03:09.337000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:03:09.338000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.338000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.338000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.338000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.338000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.338000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.338000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.338000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.338000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.339000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.339000 audit: BPF prog-id=51 op=LOAD Oct 2 19:03:09.339000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:03:09.340000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.340000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.340000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.340000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.340000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.340000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.340000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.340000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.340000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.340000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.340000 audit: BPF prog-id=52 op=LOAD Oct 2 19:03:09.340000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit: BPF prog-id=53 op=LOAD Oct 2 19:03:09.344000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit: BPF prog-id=54 op=LOAD Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.345000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.345000 audit: BPF prog-id=55 op=LOAD Oct 2 19:03:09.345000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:03:09.345000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:03:09.346000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.346000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.346000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.346000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.346000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.346000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.346000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.346000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.346000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.346000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.346000 audit: BPF prog-id=56 op=LOAD Oct 2 19:03:09.346000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:03:09.347000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.347000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.347000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.347000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.347000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.347000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.347000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.347000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.347000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.347000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:09.347000 audit: BPF prog-id=57 op=LOAD Oct 2 19:03:09.347000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:03:09.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:03:09.391445 systemd[1]: Started kubelet.service. 
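A reading aid for the audit noise above, not part of the log itself: the AVC records with capability=38 and capability=39 refer to CAP_PERFMON and CAP_BPF (the numbering comes from linux/capability.h), and the interleaved BPF prog-id=... op=LOAD / op=UNLOAD pairs appear to be systemd tearing down and re-creating the per-unit BPF programs it manages while it handles the Reloading. above. The short Python sketch below is mine, not Flatcar's or auditd's; it only decodes the two recurring fields plus the hex-encoded PROCTITLE record that was logged earlier during the audit-rules restart, which decodes to /sbin/auditctl -D.

    # Minimal helpers for reading the audit records in this log; not part of
    # Flatcar, systemd, or auditd. The capability table is hand-copied from
    # linux/capability.h and only covers the values that appear above.
    CAPABILITY_NAMES = {38: "CAP_PERFMON", 39: "CAP_BPF"}

    def capability_name(num: int) -> str:
        # Map an audit "capability=" number to its symbolic name (subset only).
        return CAPABILITY_NAMES.get(num, "capability %d" % num)

    def decode_proctitle(hex_value: str):
        # PROCTITLE is the process argv, hex-encoded with NUL separators.
        return bytes.fromhex(hex_value).decode().split("\x00")

    if __name__ == "__main__":
        print(capability_name(38), capability_name(39))
        # The PROCTITLE value from the audit-rules restart earlier in the log:
        print(decode_proctitle("2F7362696E2F617564697463746C002D44"))
        # -> ['/sbin/auditctl', '-D'], i.e. delete all audit rules, which matches
        #    the "No rules" messages from auditctl and augenrules.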
Oct 2 19:03:09.425659 systemd[1]: Starting coreos-metadata.service...
Oct 2 19:03:09.576754 kubelet[2084]: E1002 19:03:09.576653 2084 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Oct 2 19:03:09.580638 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 2 19:03:09.581005 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 2 19:03:09.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Oct 2 19:03:09.620150 coreos-metadata[2087]: Oct 02 19:03:09.619 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Oct 2 19:03:09.621085 coreos-metadata[2087]: Oct 02 19:03:09.620 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1
Oct 2 19:03:09.621588 coreos-metadata[2087]: Oct 02 19:03:09.621 INFO Fetch successful
Oct 2 19:03:09.622074 coreos-metadata[2087]: Oct 02 19:03:09.621 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1
Oct 2 19:03:09.622423 coreos-metadata[2087]: Oct 02 19:03:09.622 INFO Fetch successful
Oct 2 19:03:09.622767 coreos-metadata[2087]: Oct 02 19:03:09.622 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1
Oct 2 19:03:09.623231 coreos-metadata[2087]: Oct 02 19:03:09.622 INFO Fetch successful
Oct 2 19:03:09.623671 coreos-metadata[2087]: Oct 02 19:03:09.623 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1
Oct 2 19:03:09.624151 coreos-metadata[2087]: Oct 02 19:03:09.623 INFO Fetch successful
Oct 2 19:03:09.624589 coreos-metadata[2087]: Oct 02 19:03:09.624 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1
Oct 2 19:03:09.625014 coreos-metadata[2087]: Oct 02 19:03:09.624 INFO Fetch successful
Oct 2 19:03:09.625270 coreos-metadata[2087]: Oct 02 19:03:09.625 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1
Oct 2 19:03:09.625521 coreos-metadata[2087]: Oct 02 19:03:09.625 INFO Fetch successful
Oct 2 19:03:09.625815 coreos-metadata[2087]: Oct 02 19:03:09.625 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1
Oct 2 19:03:09.626113 coreos-metadata[2087]: Oct 02 19:03:09.625 INFO Fetch successful
Oct 2 19:03:09.626365 coreos-metadata[2087]: Oct 02 19:03:09.626 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1
Oct 2 19:03:09.626617 coreos-metadata[2087]: Oct 02 19:03:09.626 INFO Fetch successful
Oct 2 19:03:09.647612 systemd[1]: Finished coreos-metadata.service.
Oct 2 19:03:09.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:03:10.221612 systemd[1]: Stopped kubelet.service.
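Two notes on the entries just above, added as commentary rather than log content. The kubelet failure is simply the unit starting before /var/lib/kubelet/config.yaml exists; that file is normally written by a later bootstrap step (presumably whatever /home/core/install.sh drives, which is an assumption here, since the log only records that the script was run). The coreos-metadata fetches follow the usual EC2 instance-metadata flow: PUT a session token at http://169.254.169.254/latest/api/token, then GET the individual meta-data paths. A minimal sketch of that flow follows; the two X-aws-ec2-metadata-token* header names are the standard IMDSv2 ones and are assumed, because the log records only the URLs. It naturally only works from inside an EC2 instance.

    # Minimal sketch of the metadata fetch sequence shown above; this is not
    # coreos-metadata's code, just the same HTTP flow against the EC2 IMDS.
    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl_seconds: int = 60) -> str:
        # PUT /latest/api/token, as in the "Putting http://..." line above.
        # Header name is the standard IMDSv2 one (assumed, not in the log).
        req = urllib.request.Request(
            IMDS + "/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    def imds_get(path: str, token: str) -> str:
        # GET one of the paths fetched above, e.g. "2019-10-01/meta-data/instance-id".
        req = urllib.request.Request(
            IMDS + "/" + path,
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        token = imds_token()
        for path in ("2019-10-01/meta-data/instance-id",
                     "2019-10-01/meta-data/placement/availability-zone"):
            print(path, "=>", imds_get(path, token))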
Oct 2 19:03:10.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:03:10.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:03:10.266347 systemd[1]: Reloading.
Oct 2 19:03:10.430151 /usr/lib/systemd/system-generators/torcx-generator[2148]: time="2023-10-02T19:03:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]"
Oct 2 19:03:10.430211 /usr/lib/systemd/system-generators/torcx-generator[2148]: time="2023-10-02T19:03:10Z" level=info msg="torcx already run"
Oct 2 19:03:10.686933 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Oct 2 19:03:10.686976 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 2 19:03:10.729528 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 2 19:03:10.886000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:03:10.886000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:03:10.886000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:03:10.886000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:03:10.886000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:03:10.886000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:03:10.886000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:03:10.886000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:03:10.886000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:03:10.886000 audit: BPF prog-id=58 op=LOAD
Oct 2 19:03:10.887000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:03:10.887000 audit[1]: AVC avc:
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.887000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.887000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.887000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.887000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.887000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.887000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.887000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.887000 audit: BPF prog-id=59 op=LOAD Oct 2 19:03:10.888000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:03:10.888000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:03:10.890000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.890000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.890000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.890000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.890000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.890000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.890000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.890000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.890000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.892000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.892000 audit: BPF prog-id=60 op=LOAD Oct 2 19:03:10.892000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:03:10.892000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.892000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.892000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.892000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.892000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.892000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.892000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.892000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.893000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.893000 audit: BPF prog-id=61 op=LOAD Oct 2 19:03:10.893000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.893000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.893000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.893000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.893000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.893000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:03:10.893000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.893000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.894000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.894000 audit: BPF prog-id=62 op=LOAD Oct 2 19:03:10.894000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:03:10.894000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:03:10.896000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.896000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.896000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.896000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.896000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.896000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.896000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.896000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.896000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.897000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.897000 audit: BPF prog-id=63 op=LOAD Oct 2 19:03:10.897000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:03:10.897000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.897000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.898000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.898000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.898000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.898000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.898000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.898000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.898000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.898000 audit: BPF prog-id=64 op=LOAD Oct 2 19:03:10.898000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.898000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.898000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.898000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.899000 audit: BPF prog-id=65 op=LOAD Oct 2 19:03:10.899000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:03:10.899000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:03:10.904000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.904000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.904000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.904000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.904000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.904000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.904000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.904000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.904000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.905000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.905000 audit: BPF prog-id=66 op=LOAD Oct 2 19:03:10.905000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:03:10.905000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.905000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.905000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.906000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.906000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.906000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.906000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.906000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.906000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.906000 audit: BPF prog-id=67 op=LOAD Oct 2 19:03:10.906000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.906000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.906000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.906000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.906000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.906000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.907000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.907000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.907000 audit: BPF prog-id=68 op=LOAD Oct 2 19:03:10.907000 audit: BPF prog-id=49 op=UNLOAD Oct 2 19:03:10.907000 audit: BPF prog-id=50 op=UNLOAD Oct 2 19:03:10.909000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.909000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.909000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.909000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.909000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.909000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.909000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.909000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.909000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.910000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.910000 audit: BPF prog-id=69 op=LOAD Oct 2 19:03:10.910000 audit: BPF prog-id=51 op=UNLOAD Oct 2 19:03:10.911000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.911000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.911000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.911000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.911000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.911000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.911000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.911000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.912000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.912000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.912000 audit: BPF prog-id=70 op=LOAD Oct 2 19:03:10.913000 audit: BPF prog-id=52 op=UNLOAD Oct 2 19:03:10.915000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.915000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.915000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.915000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.915000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.915000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.915000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.915000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.915000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.916000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.916000 audit: BPF prog-id=71 op=LOAD Oct 2 19:03:10.916000 audit: BPF prog-id=53 op=UNLOAD Oct 2 19:03:10.916000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.916000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.916000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.916000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.916000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.916000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.916000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.916000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:03:10.916000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.916000 audit: BPF prog-id=72 op=LOAD Oct 2 19:03:10.916000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.916000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.916000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.916000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.916000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.916000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.916000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.916000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.916000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.916000 audit: BPF prog-id=73 op=LOAD Oct 2 19:03:10.916000 audit: BPF prog-id=54 op=UNLOAD Oct 2 19:03:10.916000 audit: BPF prog-id=55 op=UNLOAD Oct 2 19:03:10.917000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.917000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.917000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.917000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.917000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.917000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.917000 audit[1]: AVC 
avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.917000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.917000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.918000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.918000 audit: BPF prog-id=74 op=LOAD Oct 2 19:03:10.918000 audit: BPF prog-id=57 op=UNLOAD Oct 2 19:03:10.918000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.918000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.918000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.918000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.918000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.919000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.919000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.919000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.919000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.919000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:10.919000 audit: BPF prog-id=75 op=LOAD Oct 2 19:03:10.919000 audit: BPF prog-id=56 op=UNLOAD Oct 2 19:03:10.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:03:10.968615 systemd[1]: Started kubelet.service. 
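
The audit records above are dominated by repeated SELinux AVC denials for the bpf (capability 39) and perfmon (capability 38) capabilities, raised against pid 1 (systemd) as it reloads its BPF programs (the interleaved BPF prog-id LOAD/UNLOAD records). A minimal Python sketch for condensing such a flood into per-process, per-permission counts follows; it assumes the journal excerpt has been saved to a file named boot.log, which is a hypothetical name, not something taken from this log.

    # Summarize repeated SELinux AVC denials by (process, permission).
    # Assumption: the journal excerpt above was saved as "boot.log" (hypothetical name).
    import re
    from collections import Counter

    AVC = re.compile(r'avc:\s+denied\s+\{\s*(\w+)\s*\}.*?comm="([^"]+)"')

    counts = Counter()
    with open("boot.log") as fh:
        for line in fh:
            for perm, comm in AVC.findall(line):
                counts[(comm, perm)] += 1

    for (comm, perm), n in counts.most_common():
        print(f'{comm}: "{perm}" denied {n} times')

For a run like the one immediately above, this collapses hundreds of near-identical records into a handful of counters.
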
Oct 2 19:03:11.103242 kubelet[2204]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:03:11.103242 kubelet[2204]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:03:11.103815 kubelet[2204]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:03:11.103815 kubelet[2204]: I1002 19:03:11.103428 2204 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:03:11.106249 kubelet[2204]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:03:11.106249 kubelet[2204]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:03:11.106249 kubelet[2204]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:03:11.867958 kubelet[2204]: I1002 19:03:11.867876 2204 server.go:413] "Kubelet version" kubeletVersion="v1.25.10" Oct 2 19:03:11.868174 kubelet[2204]: I1002 19:03:11.868152 2204 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:03:11.868660 kubelet[2204]: I1002 19:03:11.868635 2204 server.go:825] "Client rotation is on, will bootstrap in background" Oct 2 19:03:11.874113 kubelet[2204]: I1002 19:03:11.874063 2204 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:03:11.877347 kubelet[2204]: W1002 19:03:11.877316 2204 machine.go:65] Cannot read vendor id correctly, set empty. Oct 2 19:03:11.880369 kubelet[2204]: I1002 19:03:11.880315 2204 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 19:03:11.881172 kubelet[2204]: I1002 19:03:11.881139 2204 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:03:11.881503 kubelet[2204]: I1002 19:03:11.881465 2204 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Oct 2 19:03:11.881878 kubelet[2204]: I1002 19:03:11.881849 2204 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:03:11.882086 kubelet[2204]: I1002 19:03:11.882065 2204 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true Oct 2 19:03:11.882364 kubelet[2204]: I1002 19:03:11.882339 2204 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:03:11.893784 kubelet[2204]: I1002 19:03:11.893743 2204 kubelet.go:381] "Attempting to sync node with API server" Oct 2 19:03:11.893784 kubelet[2204]: I1002 19:03:11.893783 2204 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:03:11.894055 kubelet[2204]: I1002 19:03:11.893821 2204 kubelet.go:281] "Adding apiserver pod source" Oct 2 19:03:11.894055 kubelet[2204]: I1002 19:03:11.893843 2204 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:03:11.895858 kubelet[2204]: E1002 19:03:11.895794 2204 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:11.896058 kubelet[2204]: E1002 19:03:11.895993 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:11.896764 kubelet[2204]: I1002 19:03:11.896725 2204 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:03:11.897801 kubelet[2204]: W1002 19:03:11.897760 2204 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 2 19:03:11.898937 kubelet[2204]: I1002 19:03:11.898859 2204 server.go:1175] "Started kubelet" Oct 2 19:03:11.900000 audit[2204]: AVC avc: denied { mac_admin } for pid=2204 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:11.900000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:03:11.900000 audit[2204]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000e3d9b0 a1=400080f578 a2=4000e3d980 a3=25 items=0 ppid=1 pid=2204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:11.900000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:03:11.900000 audit[2204]: AVC avc: denied { mac_admin } for pid=2204 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:11.900000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:03:11.900000 audit[2204]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000d715e0 a1=400080f5a8 a2=4000e3da40 a3=25 items=0 ppid=1 pid=2204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:11.900000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:03:11.902350 kubelet[2204]: I1002 19:03:11.901708 2204 kubelet.go:1274] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:03:11.902350 kubelet[2204]: I1002 19:03:11.901824 2204 kubelet.go:1278] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:03:11.902350 kubelet[2204]: I1002 19:03:11.902124 2204 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:03:11.906042 kubelet[2204]: E1002 19:03:11.905997 2204 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:03:11.906263 kubelet[2204]: E1002 19:03:11.906242 2204 kubelet.go:1317] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:03:11.909315 kubelet[2204]: I1002 19:03:11.909285 2204 server.go:155] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:03:11.910673 kubelet[2204]: I1002 19:03:11.910648 2204 server.go:438] "Adding debug handlers to kubelet server" Oct 2 19:03:11.917387 kubelet[2204]: I1002 19:03:11.917338 2204 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 19:03:11.918309 kubelet[2204]: I1002 19:03:11.918253 2204 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 2 19:03:11.920757 kubelet[2204]: W1002 19:03:11.920692 2204 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.18.218" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:03:11.922228 kubelet[2204]: E1002 19:03:11.922196 2204 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.18.218" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:03:11.922489 kubelet[2204]: W1002 19:03:11.922463 2204 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:03:11.922617 kubelet[2204]: E1002 19:03:11.922597 2204 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:03:11.923035 kubelet[2204]: E1002 19:03:11.922869 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9a61392be", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 898825406, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 898825406, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:03:11.925091 kubelet[2204]: E1002 19:03:11.925058 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:03:11.932291 kubelet[2204]: E1002 19:03:11.932252 2204 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "172.31.18.218" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:03:11.932607 kubelet[2204]: W1002 19:03:11.932582 2204 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:03:11.932751 kubelet[2204]: E1002 19:03:11.932729 2204 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:03:11.933068 kubelet[2204]: E1002 19:03:11.932957 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9a684737a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 906222970, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 906222970, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:03:11.976000 audit[2222]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=2222 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:11.976000 audit[2222]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff7f02380 a2=0 a3=1 items=0 ppid=2204 pid=2222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:11.976000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:03:11.983277 kubelet[2204]: I1002 19:03:11.983243 2204 cpu_manager.go:213] "Starting CPU manager" policy="none" Oct 2 19:03:11.983486 kubelet[2204]: I1002 19:03:11.983464 2204 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s" Oct 2 19:03:11.983608 kubelet[2204]: I1002 19:03:11.983588 2204 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:03:11.983813 kubelet[2204]: E1002 19:03:11.983643 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9ab06a593", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.18.218 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981864339, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981864339, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:03:11.983000 audit[2224]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2224 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:11.983000 audit[2224]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffd36269c0 a2=0 a3=1 items=0 ppid=2204 pid=2224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:11.983000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:03:11.986382 kubelet[2204]: E1002 19:03:11.985189 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9ab06f0ba", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.18.218 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981883578, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981883578, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:03:11.986701 kubelet[2204]: E1002 19:03:11.986481 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9ab07054d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.18.218 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981888845, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981888845, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:03:11.986979 kubelet[2204]: I1002 19:03:11.986953 2204 policy_none.go:49] "None policy: Start" Oct 2 19:03:11.988308 kubelet[2204]: I1002 19:03:11.988211 2204 memory_manager.go:168] "Starting memorymanager" policy="None" Oct 2 19:03:11.988308 kubelet[2204]: I1002 19:03:11.988301 2204 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:03:11.998672 systemd[1]: Created slice kubepods.slice. Oct 2 19:03:12.008533 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:03:12.015487 systemd[1]: Created slice kubepods-besteffort.slice. 
Oct 2 19:03:12.017849 kubelet[2204]: E1002 19:03:12.017799 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:12.018820 kubelet[2204]: I1002 19:03:12.018776 2204 kubelet_node_status.go:70] "Attempting to register node" node="172.31.18.218" Oct 2 19:03:12.020802 kubelet[2204]: E1002 19:03:12.020645 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9ab06a593", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.18.218 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981864339, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 12, 18727213, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.218.178a5fa9ab06a593" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:03:12.021282 kubelet[2204]: E1002 19:03:12.021235 2204 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.18.218" Oct 2 19:03:12.022186 kubelet[2204]: E1002 19:03:12.022042 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9ab06f0ba", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.18.218 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981883578, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 12, 18735040, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.218.178a5fa9ab06f0ba" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:03:12.024649 kubelet[2204]: I1002 19:03:12.024604 2204 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:03:12.023000 audit[2204]: AVC avc: denied { mac_admin } for pid=2204 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:12.023000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:03:12.023000 audit[2204]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4001017170 a1=400100bfe0 a2=4001017140 a3=25 items=0 ppid=1 pid=2204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.023000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:03:12.025193 kubelet[2204]: I1002 19:03:12.024781 2204 server.go:86] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:03:12.027360 kubelet[2204]: E1002 19:03:12.024488 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9ab07054d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.18.218 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981888845, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 12, 18739820, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.218.178a5fa9ab07054d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:03:12.029467 kubelet[2204]: I1002 19:03:12.028504 2204 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:03:12.031616 kubelet[2204]: E1002 19:03:12.031579 2204 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.18.218\" not found" Oct 2 19:03:12.034962 kubelet[2204]: E1002 19:03:12.034793 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9ae154e06", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 12, 33156614, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 12, 33156614, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:03:12.002000 audit[2226]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=2226 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:12.002000 audit[2226]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffea7bd260 a2=0 a3=1 items=0 ppid=2204 pid=2226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.002000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:03:12.051000 audit[2231]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=2231 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:12.051000 audit[2231]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff7269000 a2=0 a3=1 items=0 ppid=2204 pid=2231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.051000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:03:12.111000 audit[2236]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2236 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:12.111000 audit[2236]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffd04e0f80 a2=0 a3=1 items=0 ppid=2204 pid=2236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.111000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:03:12.116000 audit[2237]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=2237 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:12.116000 audit[2237]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc3d236a0 a2=0 a3=1 items=0 ppid=2204 pid=2237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.116000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:03:12.121879 kubelet[2204]: E1002 19:03:12.117965 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:12.132000 audit[2240]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=2240 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:12.132000 audit[2240]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffd7873b70 a2=0 a3=1 items=0 ppid=2204 pid=2240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.132000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:03:12.134262 kubelet[2204]: E1002 19:03:12.134210 2204 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "172.31.18.218" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:03:12.146000 audit[2243]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2243 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:12.146000 audit[2243]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffc3593390 a2=0 a3=1 items=0 ppid=2204 pid=2243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.146000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:03:12.149000 audit[2244]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=2244 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:12.149000 audit[2244]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff7c58660 a2=0 a3=1 items=0 ppid=2204 pid=2244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.149000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:03:12.153000 audit[2245]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=2245 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:12.153000 audit[2245]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd8340b50 a2=0 a3=1 items=0 ppid=2204 pid=2245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.153000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:03:12.161000 audit[2247]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=2247 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:12.161000 audit[2247]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffffc040290 a2=0 a3=1 items=0 ppid=2204 pid=2247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.161000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:03:12.170000 audit[2249]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2249 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:12.170000 audit[2249]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffee1c7230 a2=0 a3=1 items=0 ppid=2204 pid=2249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.170000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:03:12.207000 audit[2252]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2252 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:12.207000 audit[2252]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffd726def0 a2=0 a3=1 items=0 ppid=2204 pid=2252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.207000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:03:12.214000 audit[2254]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=2254 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:12.214000 audit[2254]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffc8511760 a2=0 a3=1 items=0 ppid=2204 pid=2254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.214000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:03:12.223308 kubelet[2204]: I1002 19:03:12.223223 2204 kubelet_node_status.go:70] "Attempting to register node" node="172.31.18.218" Oct 2 19:03:12.224136 kubelet[2204]: E1002 19:03:12.224075 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:12.224823 kubelet[2204]: E1002 19:03:12.224771 2204 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.18.218" Oct 2 19:03:12.225379 kubelet[2204]: E1002 19:03:12.225258 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9ab06a593", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.18.218 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981864339, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 12, 223171921, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.218.178a5fa9ab06a593" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
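The audit NETFILTER_CFG/SYSCALL records above show the kubelet (ppid 2204) invoking iptables and ip6tables to install its KUBE-* chains; each PROCTITLE field is the invoked command line, hex-encoded with NUL bytes separating the arguments, which is standard auditd behaviour. A minimal decoding sketch, with the sample value copied from the audit[2237] record above:

```python
# Decode an auditd PROCTITLE value: hex-encoded argv, NUL-separated.
# The sample is copied verbatim from the audit[2237] record in the log above.
hex_proctitle = (
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D4D41524B2D44524F50002D74006E6174"
)
argv = bytes.fromhex(hex_proctitle).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# prints: iptables -w 5 -W 100000 -N KUBE-MARK-DROP -t nat
```

The other PROCTITLE values in this section decode the same way, covering the KUBE-FIREWALL, KUBE-MARK-DROP, KUBE-MARK-MASQ and KUBE-POSTROUTING chain setup.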
Oct 2 19:03:12.226755 kubelet[2204]: E1002 19:03:12.226632 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9ab06f0ba", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.18.218 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981883578, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 12, 223181218, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.218.178a5fa9ab06f0ba" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:03:12.234000 audit[2257]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=2257 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:12.234000 audit[2257]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=ffffd6144560 a2=0 a3=1 items=0 ppid=2204 pid=2257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.234000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:03:12.236284 kubelet[2204]: I1002 19:03:12.236255 2204 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Oct 2 19:03:12.238000 audit[2259]: NETFILTER_CFG table=mangle:17 family=2 entries=1 op=nft_register_chain pid=2259 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:12.238000 audit[2259]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdf47d5f0 a2=0 a3=1 items=0 ppid=2204 pid=2259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.238000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:03:12.239000 audit[2258]: NETFILTER_CFG table=mangle:18 family=10 entries=2 op=nft_register_chain pid=2258 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:12.239000 audit[2258]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffd32d490 a2=0 a3=1 items=0 ppid=2204 pid=2258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.239000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:03:12.242000 audit[2261]: NETFILTER_CFG table=nat:19 family=2 entries=1 op=nft_register_chain pid=2261 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:12.242000 audit[2261]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcc6635d0 a2=0 a3=1 items=0 ppid=2204 pid=2261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.242000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:03:12.245000 audit[2262]: NETFILTER_CFG table=nat:20 family=10 entries=2 op=nft_register_chain pid=2262 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:12.245000 audit[2262]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc7599d80 a2=0 a3=1 items=0 ppid=2204 pid=2262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.245000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:03:12.247000 audit[2263]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=2263 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:12.247000 audit[2263]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffed9b000 a2=0 a3=1 items=0 ppid=2204 pid=2263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.247000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:03:12.253000 audit[2265]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=2265 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:12.253000 audit[2265]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffcd6b7a70 a2=0 a3=1 items=0 ppid=2204 pid=2265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.253000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:03:12.257000 audit[2266]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=2266 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:12.257000 audit[2266]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffdef3f150 a2=0 a3=1 items=0 ppid=2204 pid=2266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.257000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:03:12.264000 audit[2268]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=2268 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:12.264000 audit[2268]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffc4c4b230 a2=0 a3=1 items=0 ppid=2204 pid=2268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.264000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:03:12.268000 audit[2269]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=2269 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:12.268000 audit[2269]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffceb53f00 a2=0 a3=1 items=0 ppid=2204 pid=2269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.268000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:03:12.272000 audit[2270]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=2270 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:12.272000 audit[2270]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc5427bb0 a2=0 a3=1 items=0 ppid=2204 pid=2270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.272000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:03:12.279000 audit[2272]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=2272 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:12.279000 audit[2272]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 
a0=3 a1=ffffde1bcff0 a2=0 a3=1 items=0 ppid=2204 pid=2272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.279000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:03:12.286000 audit[2274]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=2274 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:12.286000 audit[2274]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe76ad4a0 a2=0 a3=1 items=0 ppid=2204 pid=2274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.286000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:03:12.294000 audit[2276]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=2276 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:12.294000 audit[2276]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffd57df770 a2=0 a3=1 items=0 ppid=2204 pid=2276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.294000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:03:12.302000 audit[2278]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=2278 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:12.302000 audit[2278]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffe8d55220 a2=0 a3=1 items=0 ppid=2204 pid=2278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.302000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:03:12.311000 audit[2280]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=2280 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:12.311000 audit[2280]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=ffffeef20780 a2=0 a3=1 items=0 ppid=2204 pid=2280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.311000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 
19:03:12.313504 kubelet[2204]: I1002 19:03:12.313479 2204 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Oct 2 19:03:12.313665 kubelet[2204]: I1002 19:03:12.313646 2204 status_manager.go:161] "Starting to sync pod status with apiserver" Oct 2 19:03:12.314118 kubelet[2204]: I1002 19:03:12.314094 2204 kubelet.go:2010] "Starting kubelet main sync loop" Oct 2 19:03:12.314390 kubelet[2204]: E1002 19:03:12.314369 2204 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:03:12.315794 kubelet[2204]: E1002 19:03:12.315673 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9ab07054d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.18.218 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981888845, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 12, 223187013, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.218.178a5fa9ab07054d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
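For working with the kubelet records in this journal, the klog header (severity letter, month/day, time, process id, source file:line, then the message) can be split with a small parser. A sketch, using a record quoted from the log above:

```python
import re

# Split a klog-style kubelet record into its header fields and message.
KLOG = re.compile(
    r'(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+'
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+'
    r'(?P<pid>\d+)\s+'
    r'(?P<src>[\w./-]+:\d+)\]\s*(?P<msg>.*)'
)

record = ('E1002 19:03:12.314369 2204 kubelet.go:2034] '
          '"Skipping pod synchronization" '
          'err="PLEG is not healthy: pleg has yet to be successful"')
m = KLOG.match(record)
if m:
    print(m.group("sev"), m.group("src"), m.group("msg"))
```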
Oct 2 19:03:12.316429 kubelet[2204]: W1002 19:03:12.316401 2204 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:03:12.316652 kubelet[2204]: E1002 19:03:12.316629 2204 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:03:12.317000 audit[2281]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=2281 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:12.317000 audit[2281]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe7e96dc0 a2=0 a3=1 items=0 ppid=2204 pid=2281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.317000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:03:12.321000 audit[2282]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=2282 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:12.321000 audit[2282]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdf54a2b0 a2=0 a3=1 items=0 ppid=2204 pid=2282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.321000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:03:12.325019 kubelet[2204]: E1002 19:03:12.324984 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:12.324000 audit[2283]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2283 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:12.324000 audit[2283]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff7167dd0 a2=0 a3=1 items=0 ppid=2204 pid=2283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:12.324000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:03:12.426275 kubelet[2204]: E1002 19:03:12.425601 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:12.526089 kubelet[2204]: E1002 19:03:12.526054 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:12.535614 kubelet[2204]: E1002 19:03:12.535584 2204 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "172.31.18.218" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:03:12.626069 kubelet[2204]: I1002 19:03:12.626026 2204 kubelet_node_status.go:70] "Attempting to register node" 
node="172.31.18.218" Oct 2 19:03:12.626357 kubelet[2204]: E1002 19:03:12.626316 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:12.627556 kubelet[2204]: E1002 19:03:12.627526 2204 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.18.218" Oct 2 19:03:12.628216 kubelet[2204]: E1002 19:03:12.628108 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9ab06a593", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.18.218 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981864339, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 12, 625979221, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.218.178a5fa9ab06a593" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:03:12.715239 kubelet[2204]: E1002 19:03:12.715063 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9ab06f0ba", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.18.218 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981883578, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 12, 625987753, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.218.178a5fa9ab06f0ba" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:03:12.727480 kubelet[2204]: E1002 19:03:12.727433 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:12.780070 kubelet[2204]: W1002 19:03:12.780022 2204 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:03:12.780070 kubelet[2204]: E1002 19:03:12.780072 2204 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:03:12.828458 kubelet[2204]: E1002 19:03:12.828416 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:12.883321 kubelet[2204]: W1002 19:03:12.883295 2204 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:03:12.883502 kubelet[2204]: E1002 19:03:12.883481 2204 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:03:12.896475 kubelet[2204]: E1002 19:03:12.896431 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:12.915084 kubelet[2204]: E1002 19:03:12.914975 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9ab07054d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.18.218 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981888845, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 12, 625993381, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.218.178a5fa9ab07054d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
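Every rejection above carries the same apiserver message shape, User "system:anonymous" cannot &lt;verb&gt; resource "&lt;resource&gt;" in API group "&lt;group&gt;", which is why node registration, the node lease, event posting, and the Service/CSIDriver/RuntimeClass/Node informers all keep failing: the kubelet's requests are reaching the apiserver as the anonymous user rather than with accepted credentials. A hypothetical helper, keyed to that literal message format, to tally which verb/resource pairs are being denied:

```python
import re
from collections import Counter

# Tally RBAC denials by (verb, resource, api_group). The pattern matches the
# literal apiserver denial text seen in the journal above.
DENIAL = re.compile(
    r'User "system:anonymous" cannot (\w+) resource "([^"]+)" in API group "([^"]*)"'
)

def tally_denials(log_text: str) -> Counter:
    return Counter(DENIAL.findall(log_text))

# Usage (hypothetical file name):
#   counts = tally_denials(open("kubelet-journal.log").read())
#   for (verb, resource, group), n in counts.most_common():
#       print(f"{n:4d}  {verb} {resource} ({group or 'core'})")
```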
Oct 2 19:03:12.928688 kubelet[2204]: E1002 19:03:12.928658 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:13.029303 kubelet[2204]: E1002 19:03:13.029270 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:13.129823 kubelet[2204]: E1002 19:03:13.129761 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:13.230850 kubelet[2204]: E1002 19:03:13.230822 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:13.331662 kubelet[2204]: E1002 19:03:13.331530 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:13.334035 kubelet[2204]: W1002 19:03:13.333984 2204 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.18.218" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:03:13.334035 kubelet[2204]: E1002 19:03:13.334038 2204 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.18.218" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:03:13.337248 kubelet[2204]: E1002 19:03:13.337218 2204 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "172.31.18.218" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:03:13.429622 kubelet[2204]: I1002 19:03:13.429592 2204 kubelet_node_status.go:70] "Attempting to register node" node="172.31.18.218" Oct 2 19:03:13.431191 kubelet[2204]: E1002 19:03:13.431143 2204 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.18.218" Oct 2 19:03:13.431305 kubelet[2204]: E1002 19:03:13.431210 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9ab06a593", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.18.218 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981864339, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 13, 429535178, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.218.178a5fa9ab06a593" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:03:13.431670 kubelet[2204]: E1002 19:03:13.431637 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:13.432732 kubelet[2204]: E1002 19:03:13.432640 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9ab06f0ba", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.18.218 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981883578, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 13, 429554977, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.218.178a5fa9ab06f0ba" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:03:13.435830 kubelet[2204]: W1002 19:03:13.435797 2204 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:03:13.435954 kubelet[2204]: E1002 19:03:13.435840 2204 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:03:13.515397 kubelet[2204]: E1002 19:03:13.515268 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9ab07054d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.18.218 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981888845, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 13, 429560488, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.218.178a5fa9ab07054d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:03:13.532635 kubelet[2204]: E1002 19:03:13.532598 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:13.633092 kubelet[2204]: E1002 19:03:13.632985 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:13.733599 kubelet[2204]: E1002 19:03:13.733548 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:13.834017 kubelet[2204]: E1002 19:03:13.833987 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:13.897390 kubelet[2204]: E1002 19:03:13.897305 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:13.934328 kubelet[2204]: E1002 19:03:13.934288 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:14.034799 kubelet[2204]: E1002 19:03:14.034772 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:14.135288 kubelet[2204]: E1002 19:03:14.135261 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:14.236338 kubelet[2204]: E1002 19:03:14.236312 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:14.337238 kubelet[2204]: E1002 19:03:14.337207 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:14.438029 kubelet[2204]: E1002 19:03:14.438004 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:14.538571 kubelet[2204]: E1002 19:03:14.538480 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:14.639292 kubelet[2204]: E1002 19:03:14.639239 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:14.739693 kubelet[2204]: E1002 19:03:14.739660 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:14.840300 kubelet[2204]: E1002 19:03:14.840215 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:14.897581 kubelet[2204]: E1002 19:03:14.897552 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:14.939322 kubelet[2204]: E1002 19:03:14.939267 2204 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "172.31.18.218" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:03:14.940344 kubelet[2204]: E1002 19:03:14.940310 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:14.995768 kubelet[2204]: W1002 19:03:14.995736 2204 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:03:14.995873 kubelet[2204]: E1002 19:03:14.995780 2204 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User 
"system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:03:15.033024 kubelet[2204]: I1002 19:03:15.032987 2204 kubelet_node_status.go:70] "Attempting to register node" node="172.31.18.218" Oct 2 19:03:15.034298 kubelet[2204]: E1002 19:03:15.034267 2204 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.18.218" Oct 2 19:03:15.034961 kubelet[2204]: E1002 19:03:15.034814 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9ab06a593", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.18.218 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981864339, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 15, 32890597, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.218.178a5fa9ab06a593" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:03:15.036396 kubelet[2204]: E1002 19:03:15.036299 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9ab06f0ba", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.18.218 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981883578, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 15, 32950129, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.218.178a5fa9ab06f0ba" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:03:15.037888 kubelet[2204]: E1002 19:03:15.037791 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9ab07054d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.18.218 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981888845, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 15, 32955681, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.218.178a5fa9ab07054d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:03:15.041024 kubelet[2204]: E1002 19:03:15.040988 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:15.141411 kubelet[2204]: E1002 19:03:15.141309 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:15.215664 kubelet[2204]: W1002 19:03:15.215610 2204 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:03:15.215664 kubelet[2204]: E1002 19:03:15.215664 2204 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:03:15.242150 kubelet[2204]: E1002 19:03:15.242115 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:15.343027 kubelet[2204]: E1002 19:03:15.342976 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:15.443507 kubelet[2204]: E1002 19:03:15.443384 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:15.543869 kubelet[2204]: E1002 19:03:15.543816 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:15.644308 kubelet[2204]: E1002 19:03:15.644265 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:15.651816 kubelet[2204]: W1002 19:03:15.651769 2204 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.18.218" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:03:15.651816 kubelet[2204]: E1002 19:03:15.651817 2204 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed 
to watch *v1.Node: failed to list *v1.Node: nodes "172.31.18.218" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:03:15.745307 kubelet[2204]: E1002 19:03:15.745257 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:15.845623 kubelet[2204]: E1002 19:03:15.845579 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:15.898013 kubelet[2204]: E1002 19:03:15.897971 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:15.946763 kubelet[2204]: E1002 19:03:15.946719 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:16.047444 kubelet[2204]: E1002 19:03:16.047300 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:16.148012 kubelet[2204]: E1002 19:03:16.147962 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:16.239980 kubelet[2204]: W1002 19:03:16.239939 2204 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:03:16.240131 kubelet[2204]: E1002 19:03:16.239987 2204 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:03:16.248211 kubelet[2204]: E1002 19:03:16.248170 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:16.348389 kubelet[2204]: E1002 19:03:16.348282 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:16.449008 kubelet[2204]: E1002 19:03:16.448979 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:16.549528 kubelet[2204]: E1002 19:03:16.549491 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:16.650077 kubelet[2204]: E1002 19:03:16.649981 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:16.750420 kubelet[2204]: E1002 19:03:16.750393 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:16.850812 kubelet[2204]: E1002 19:03:16.850788 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:16.899167 kubelet[2204]: E1002 19:03:16.899134 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:16.951756 kubelet[2204]: E1002 19:03:16.951659 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:17.029923 kubelet[2204]: E1002 19:03:17.029824 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:03:17.052356 kubelet[2204]: E1002 19:03:17.052318 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:17.152824 kubelet[2204]: E1002 19:03:17.152785 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not 
found" Oct 2 19:03:17.253721 kubelet[2204]: E1002 19:03:17.253683 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:17.354670 kubelet[2204]: E1002 19:03:17.354641 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:17.455417 kubelet[2204]: E1002 19:03:17.455392 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:17.556040 kubelet[2204]: E1002 19:03:17.555915 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:17.656371 kubelet[2204]: E1002 19:03:17.656319 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:17.756732 kubelet[2204]: E1002 19:03:17.756699 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:17.857376 kubelet[2204]: E1002 19:03:17.857291 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:17.899633 kubelet[2204]: E1002 19:03:17.899605 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:17.957944 kubelet[2204]: E1002 19:03:17.957882 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:18.058387 kubelet[2204]: E1002 19:03:18.058355 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:18.141513 kubelet[2204]: E1002 19:03:18.141409 2204 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "172.31.18.218" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:03:18.159563 kubelet[2204]: E1002 19:03:18.159534 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:18.236440 kubelet[2204]: I1002 19:03:18.236384 2204 kubelet_node_status.go:70] "Attempting to register node" node="172.31.18.218" Oct 2 19:03:18.238187 kubelet[2204]: E1002 19:03:18.238157 2204 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.18.218" Oct 2 19:03:18.238353 kubelet[2204]: E1002 19:03:18.238096 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9ab06a593", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.18.218 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981864339, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 18, 236317093, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), 
Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.218.178a5fa9ab06a593" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:03:18.240030 kubelet[2204]: E1002 19:03:18.239921 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9ab06f0ba", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.18.218 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981883578, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 18, 236339694, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.218.178a5fa9ab06f0ba" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:03:18.241545 kubelet[2204]: E1002 19:03:18.241455 2204 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.218.178a5fa9ab07054d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.218", UID:"172.31.18.218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.18.218 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.218"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 3, 11, 981888845, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 3, 18, 236352977, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.218.178a5fa9ab07054d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
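The lease controller's retry interval doubles across this stretch of the log (400ms, then 800ms, 1.6s, 3.2s, and finally 6.4s), a plain exponential backoff. A toy sketch of that schedule, assuming simple doubling with no cap or jitter beyond what the log shows:

```python
# Reproduce the lease-retry schedule visible above: 0.4s, doubling each attempt.
delay = 0.4
for attempt in range(1, 6):
    print(f"attempt {attempt}: retry in {delay:g}s")
    delay *= 2
```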
Oct 2 19:03:18.260678 kubelet[2204]: E1002 19:03:18.260632 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:18.360993 kubelet[2204]: E1002 19:03:18.360941 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:18.461718 kubelet[2204]: E1002 19:03:18.461604 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:18.562076 kubelet[2204]: E1002 19:03:18.562041 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:18.662564 kubelet[2204]: E1002 19:03:18.662519 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:18.763025 kubelet[2204]: E1002 19:03:18.762977 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:18.863557 kubelet[2204]: E1002 19:03:18.863532 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:18.899862 kubelet[2204]: E1002 19:03:18.899834 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:18.963745 kubelet[2204]: E1002 19:03:18.963718 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:19.064531 kubelet[2204]: E1002 19:03:19.064425 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:19.165126 kubelet[2204]: E1002 19:03:19.165085 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:19.266079 kubelet[2204]: E1002 19:03:19.266051 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:19.330643 kubelet[2204]: W1002 19:03:19.330525 2204 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:03:19.331024 kubelet[2204]: E1002 19:03:19.330818 2204 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:03:19.367085 kubelet[2204]: E1002 19:03:19.367057 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:19.467830 kubelet[2204]: E1002 19:03:19.467807 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:19.568455 kubelet[2204]: E1002 19:03:19.568403 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:19.669014 kubelet[2204]: E1002 19:03:19.668918 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:19.769341 kubelet[2204]: E1002 19:03:19.769311 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:19.869786 kubelet[2204]: E1002 19:03:19.869740 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:19.900122 kubelet[2204]: E1002 19:03:19.900097 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:19.970426 kubelet[2204]: E1002 19:03:19.970312 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 
2 19:03:20.071044 kubelet[2204]: E1002 19:03:20.070988 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:20.171633 kubelet[2204]: E1002 19:03:20.171600 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:20.172745 kubelet[2204]: W1002 19:03:20.172716 2204 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:03:20.172950 kubelet[2204]: E1002 19:03:20.172921 2204 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:03:20.272123 kubelet[2204]: E1002 19:03:20.272094 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:20.373229 kubelet[2204]: E1002 19:03:20.373180 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:20.416015 kubelet[2204]: W1002 19:03:20.415968 2204 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:03:20.416015 kubelet[2204]: E1002 19:03:20.416014 2204 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:03:20.473459 kubelet[2204]: E1002 19:03:20.473417 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:20.574044 kubelet[2204]: E1002 19:03:20.573920 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:20.674534 kubelet[2204]: E1002 19:03:20.674480 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:20.775077 kubelet[2204]: E1002 19:03:20.775051 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:20.875664 kubelet[2204]: E1002 19:03:20.875556 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:20.900768 kubelet[2204]: E1002 19:03:20.900739 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:20.975704 kubelet[2204]: E1002 19:03:20.975657 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:21.076379 kubelet[2204]: E1002 19:03:21.076326 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:21.177403 kubelet[2204]: E1002 19:03:21.177292 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:21.277841 kubelet[2204]: E1002 19:03:21.277789 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:21.378966 kubelet[2204]: E1002 19:03:21.378926 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:21.479771 kubelet[2204]: E1002 19:03:21.479675 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 
19:03:21.580250 kubelet[2204]: E1002 19:03:21.580215 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:21.680805 kubelet[2204]: E1002 19:03:21.680755 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:21.735666 kubelet[2204]: W1002 19:03:21.735628 2204 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.18.218" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:03:21.735794 kubelet[2204]: E1002 19:03:21.735675 2204 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.18.218" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:03:21.781876 kubelet[2204]: E1002 19:03:21.781850 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:21.871208 kubelet[2204]: I1002 19:03:21.871183 2204 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:03:21.882986 kubelet[2204]: E1002 19:03:21.882942 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:21.901155 kubelet[2204]: E1002 19:03:21.901132 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:21.983744 kubelet[2204]: E1002 19:03:21.983695 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:22.030835 kubelet[2204]: E1002 19:03:22.030717 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:03:22.032337 kubelet[2204]: E1002 19:03:22.032290 2204 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.18.218\" not found" Oct 2 19:03:22.084408 kubelet[2204]: E1002 19:03:22.084379 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:22.185082 kubelet[2204]: E1002 19:03:22.185052 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:22.285229 kubelet[2204]: E1002 19:03:22.285120 2204 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.18.218" not found Oct 2 19:03:22.286174 kubelet[2204]: E1002 19:03:22.286141 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:22.386455 kubelet[2204]: E1002 19:03:22.386403 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:22.486971 kubelet[2204]: E1002 19:03:22.486946 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:22.588089 kubelet[2204]: E1002 19:03:22.587972 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:22.688792 kubelet[2204]: E1002 19:03:22.688742 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:22.789725 kubelet[2204]: E1002 19:03:22.789676 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:22.890417 kubelet[2204]: E1002 19:03:22.890326 2204 kubelet.go:2448] "Error getting 
node" err="node \"172.31.18.218\" not found" Oct 2 19:03:22.901595 kubelet[2204]: E1002 19:03:22.901569 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:22.990796 kubelet[2204]: E1002 19:03:22.990762 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:23.091564 kubelet[2204]: E1002 19:03:23.091532 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:23.192138 kubelet[2204]: E1002 19:03:23.192026 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:23.292556 kubelet[2204]: E1002 19:03:23.292517 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:23.338970 kubelet[2204]: E1002 19:03:23.338939 2204 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.18.218" not found Oct 2 19:03:23.393683 kubelet[2204]: E1002 19:03:23.393655 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:23.494398 kubelet[2204]: E1002 19:03:23.494322 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:23.595246 kubelet[2204]: E1002 19:03:23.595212 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:23.695741 kubelet[2204]: E1002 19:03:23.695695 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:23.796663 kubelet[2204]: E1002 19:03:23.796543 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:23.897286 kubelet[2204]: E1002 19:03:23.897238 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:23.902458 kubelet[2204]: E1002 19:03:23.902433 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:23.998402 kubelet[2204]: E1002 19:03:23.998349 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:24.099130 kubelet[2204]: E1002 19:03:24.099032 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:24.200001 kubelet[2204]: E1002 19:03:24.199946 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:24.300601 kubelet[2204]: E1002 19:03:24.300573 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:24.401240 kubelet[2204]: E1002 19:03:24.401128 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:24.501774 kubelet[2204]: E1002 19:03:24.501737 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:24.547546 kubelet[2204]: E1002 19:03:24.547514 2204 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.18.218\" not found" node="172.31.18.218" Oct 2 19:03:24.602605 kubelet[2204]: E1002 19:03:24.602561 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:24.640317 kubelet[2204]: I1002 19:03:24.639706 2204 kubelet_node_status.go:70] "Attempting to register node" node="172.31.18.218" Oct 2 19:03:24.703620 kubelet[2204]: E1002 19:03:24.703496 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:24.739752 
kubelet[2204]: I1002 19:03:24.739720 2204 kubelet_node_status.go:73] "Successfully registered node" node="172.31.18.218" Oct 2 19:03:24.804107 kubelet[2204]: E1002 19:03:24.804049 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:24.902861 kubelet[2204]: E1002 19:03:24.902831 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:24.905029 kubelet[2204]: E1002 19:03:24.905001 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:25.006180 kubelet[2204]: E1002 19:03:25.005988 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:25.006407 sudo[2000]: pam_unix(sudo:session): session closed for user root Oct 2 19:03:25.009171 kernel: kauditd_printk_skb: 540 callbacks suppressed Oct 2 19:03:25.009240 kernel: audit: type=1106 audit(1696273405.005:641): pid=2000 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:03:25.005000 audit[2000]: USER_END pid=2000 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:03:25.005000 audit[2000]: CRED_DISP pid=2000 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:03:25.026327 kernel: audit: type=1104 audit(1696273405.005:642): pid=2000 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:03:25.031214 sshd[1997]: pam_unix(sshd:session): session closed for user core Oct 2 19:03:25.033000 audit[1997]: USER_END pid=1997 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:03:25.037821 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:03:25.039272 systemd[1]: sshd@6-172.31.18.218:22-139.178.89.65:53714.service: Deactivated successfully. 
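Editor's note: after the certificate rotation at 19:03:21, registration succeeds at 19:03:24 ("Successfully registered node"), which also unblocks the kube-node-lease heartbeat that was previously failing ("failed to ensure lease exists", "Failed to get node when trying to set owner ref to the node lease"). A short client-go sketch that reads back the two objects involved, useful to confirm the node is now known to the API server; the node name is taken from the log, the kubeconfig path is an assumption.

```go
// heartbeat.go - read back the Node object and its kube-node-lease Lease,
// the two objects the kubelet could not create before registration succeeded.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	nodeName := "172.31.18.218" // from the log
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config") // assumed path
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Before registration this Get fails with the "node not found" error seen above.
	node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("node registered:", node.Name, "created:", node.CreationTimestamp)

	// The kubelet heartbeats by renewing a Lease named after the node in kube-node-lease.
	lease, err := client.CoordinationV1().Leases("kube-node-lease").Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("lease renew time:", lease.Spec.RenewTime)
}
```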
Oct 2 19:03:25.047054 kernel: audit: type=1106 audit(1696273405.033:643): pid=1997 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:03:25.047143 kernel: audit: type=1104 audit(1696273405.034:644): pid=1997 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:03:25.034000 audit[1997]: CRED_DISP pid=1997 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:03:25.056441 systemd-logind[1729]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:03:25.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.18.218:22-139.178.89.65:53714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:03:25.066201 kernel: audit: type=1131 audit(1696273405.034:645): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.18.218:22-139.178.89.65:53714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:03:25.066729 systemd-logind[1729]: Removed session 7. Oct 2 19:03:25.106797 kubelet[2204]: E1002 19:03:25.106755 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:25.207412 kubelet[2204]: E1002 19:03:25.207378 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:25.309036 kubelet[2204]: E1002 19:03:25.308933 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:25.409680 kubelet[2204]: E1002 19:03:25.409626 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:25.510268 kubelet[2204]: E1002 19:03:25.510224 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:25.610895 kubelet[2204]: E1002 19:03:25.610789 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:25.711851 kubelet[2204]: E1002 19:03:25.711813 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:25.812413 kubelet[2204]: E1002 19:03:25.812386 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:25.904201 kubelet[2204]: E1002 19:03:25.904075 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:25.913475 kubelet[2204]: E1002 19:03:25.913427 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:26.014004 kubelet[2204]: E1002 19:03:26.013961 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:26.114723 kubelet[2204]: E1002 19:03:26.114696 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:26.215380 kubelet[2204]: E1002 19:03:26.215271 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:26.316020 
kubelet[2204]: E1002 19:03:26.315985 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:26.417257 kubelet[2204]: E1002 19:03:26.417207 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:26.517790 kubelet[2204]: E1002 19:03:26.517741 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:26.618374 kubelet[2204]: E1002 19:03:26.618347 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:26.718999 kubelet[2204]: E1002 19:03:26.718976 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:26.802718 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Oct 2 19:03:26.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:03:26.812941 kernel: audit: type=1131 audit(1696273406.802:646): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:03:26.819427 kubelet[2204]: E1002 19:03:26.819395 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:26.832000 audit: BPF prog-id=65 op=UNLOAD Oct 2 19:03:26.832000 audit: BPF prog-id=64 op=UNLOAD Oct 2 19:03:26.838350 kernel: audit: type=1334 audit(1696273406.832:647): prog-id=65 op=UNLOAD Oct 2 19:03:26.838421 kernel: audit: type=1334 audit(1696273406.832:648): prog-id=64 op=UNLOAD Oct 2 19:03:26.838497 kernel: audit: type=1334 audit(1696273406.832:649): prog-id=63 op=UNLOAD Oct 2 19:03:26.832000 audit: BPF prog-id=63 op=UNLOAD Oct 2 19:03:26.904568 kubelet[2204]: E1002 19:03:26.904533 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:26.920259 kubelet[2204]: E1002 19:03:26.920236 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:27.020966 kubelet[2204]: E1002 19:03:27.020938 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:27.032121 kubelet[2204]: E1002 19:03:27.032066 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:03:27.122661 kubelet[2204]: E1002 19:03:27.121986 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:27.223532 kubelet[2204]: E1002 19:03:27.223501 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:27.324298 kubelet[2204]: E1002 19:03:27.324251 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:27.424781 kubelet[2204]: E1002 19:03:27.424362 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:27.526012 kubelet[2204]: E1002 19:03:27.525982 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:27.626999 kubelet[2204]: E1002 19:03:27.626948 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:27.727924 kubelet[2204]: E1002 19:03:27.727513 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not 
found" Oct 2 19:03:27.828176 kubelet[2204]: E1002 19:03:27.828131 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:27.905354 kubelet[2204]: E1002 19:03:27.905297 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:27.929003 kubelet[2204]: E1002 19:03:27.928977 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:28.029620 kubelet[2204]: E1002 19:03:28.029576 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:28.130238 kubelet[2204]: E1002 19:03:28.130212 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:28.230734 kubelet[2204]: E1002 19:03:28.230687 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:28.331824 kubelet[2204]: E1002 19:03:28.331450 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:28.432492 kubelet[2204]: E1002 19:03:28.432440 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:28.533088 kubelet[2204]: E1002 19:03:28.533060 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:28.634133 kubelet[2204]: E1002 19:03:28.633727 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:28.734364 kubelet[2204]: E1002 19:03:28.734320 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:28.834972 kubelet[2204]: E1002 19:03:28.834947 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:28.905865 kubelet[2204]: E1002 19:03:28.905505 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:28.935659 kubelet[2204]: E1002 19:03:28.935634 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:29.036298 kubelet[2204]: E1002 19:03:29.036268 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:29.137077 kubelet[2204]: E1002 19:03:29.137050 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:29.237747 kubelet[2204]: E1002 19:03:29.237703 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:29.338519 kubelet[2204]: E1002 19:03:29.338489 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:29.439640 kubelet[2204]: E1002 19:03:29.439581 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:29.540610 kubelet[2204]: E1002 19:03:29.540207 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:29.641724 kubelet[2204]: E1002 19:03:29.641696 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:29.742391 kubelet[2204]: E1002 19:03:29.742345 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:29.843323 kubelet[2204]: E1002 19:03:29.842988 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:29.906478 kubelet[2204]: E1002 19:03:29.906422 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:29.944067 kubelet[2204]: E1002 
19:03:29.944024 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:30.044696 kubelet[2204]: E1002 19:03:30.044670 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:30.145887 kubelet[2204]: E1002 19:03:30.145454 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:30.246090 kubelet[2204]: E1002 19:03:30.246046 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:30.346911 kubelet[2204]: E1002 19:03:30.346866 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:30.447531 kubelet[2204]: E1002 19:03:30.447128 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:30.548301 kubelet[2204]: E1002 19:03:30.548272 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:30.648989 kubelet[2204]: E1002 19:03:30.648963 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:30.749676 kubelet[2204]: E1002 19:03:30.749649 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:30.850339 kubelet[2204]: E1002 19:03:30.850314 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:30.906965 kubelet[2204]: E1002 19:03:30.906941 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:30.950675 kubelet[2204]: E1002 19:03:30.950650 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:31.051983 kubelet[2204]: E1002 19:03:31.051549 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:31.152984 kubelet[2204]: E1002 19:03:31.152953 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:31.253642 kubelet[2204]: E1002 19:03:31.253597 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:31.354338 kubelet[2204]: E1002 19:03:31.353931 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:31.454784 kubelet[2204]: E1002 19:03:31.454752 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:31.555647 kubelet[2204]: E1002 19:03:31.555615 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:31.656550 kubelet[2204]: E1002 19:03:31.656257 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:31.756621 kubelet[2204]: E1002 19:03:31.756570 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:31.857246 kubelet[2204]: E1002 19:03:31.857218 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:31.893921 kubelet[2204]: E1002 19:03:31.893879 2204 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:31.907731 kubelet[2204]: E1002 19:03:31.907397 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:31.958135 kubelet[2204]: E1002 19:03:31.958090 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:32.032562 kubelet[2204]: E1002 19:03:32.032525 2204 eviction_manager.go:256] "Eviction 
manager: failed to get summary stats" err="failed to get node info: node \"172.31.18.218\" not found" Oct 2 19:03:32.033659 kubelet[2204]: E1002 19:03:32.033630 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:03:32.058212 kubelet[2204]: E1002 19:03:32.058181 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:32.159171 kubelet[2204]: E1002 19:03:32.158792 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:32.259429 kubelet[2204]: E1002 19:03:32.259385 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:32.359672 kubelet[2204]: E1002 19:03:32.359623 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:32.461384 kubelet[2204]: E1002 19:03:32.460430 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:32.561129 kubelet[2204]: E1002 19:03:32.561082 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:32.661735 kubelet[2204]: E1002 19:03:32.661709 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:32.762970 kubelet[2204]: E1002 19:03:32.762891 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:32.863372 kubelet[2204]: E1002 19:03:32.863331 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:32.907869 kubelet[2204]: E1002 19:03:32.907828 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:32.963480 kubelet[2204]: E1002 19:03:32.963434 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:33.064544 kubelet[2204]: E1002 19:03:33.064212 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:33.164796 kubelet[2204]: E1002 19:03:33.164741 2204 kubelet.go:2448] "Error getting node" err="node \"172.31.18.218\" not found" Oct 2 19:03:33.265181 kubelet[2204]: I1002 19:03:33.265130 2204 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:03:33.265754 env[1745]: time="2023-10-02T19:03:33.265700593Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
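Editor's note: the node has now been assigned pod CIDR 192.168.1.0/24 and the kubelet passes it to the runtime over CRI, but containerd keeps reporting NetworkPluginNotReady because no CNI network config has been written yet ("No cni config template is specified, wait for other system components to drop the config"); the Cilium pod scheduled just below is what eventually provides one. A small standard-library Go sketch of the two facts behind that state: the advertised pod CIDR parses, and the conventional CNI config directory is still empty. /etc/cni/net.d is containerd's customary default and is an assumption here.

```go
// cnicheck.go - the two facts behind "cni plugin not initialized":
// the node has a pod CIDR, but no CNI network config exists yet.
package main

import (
	"fmt"
	"net"
	"os"
	"path/filepath"
)

func main() {
	// Pod CIDR handed to the runtime in the log above.
	_, cidr, err := net.ParseCIDR("192.168.1.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := cidr.Mask.Size()
	fmt.Printf("pod CIDR %s (%d addresses for this node)\n", cidr, 1<<(bits-ones))

	// NetworkPluginNotReady persists until a *.conf/*.conflist file appears here
	// (assumed default location for containerd's CNI plugin).
	matches, err := filepath.Glob("/etc/cni/net.d/*.conf*")
	if err != nil {
		panic(err)
	}
	if len(matches) == 0 {
		fmt.Println("no CNI config found: runtime network stays NotReady")
		os.Exit(1)
	}
	fmt.Println("CNI config present:", matches)
}
```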
Oct 2 19:03:33.266250 kubelet[2204]: I1002 19:03:33.266053 2204 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:03:33.266571 kubelet[2204]: E1002 19:03:33.266546 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:03:33.907324 kubelet[2204]: I1002 19:03:33.907267 2204 apiserver.go:52] "Watching apiserver" Oct 2 19:03:33.908403 kubelet[2204]: E1002 19:03:33.908376 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:33.911406 kubelet[2204]: I1002 19:03:33.911370 2204 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:03:33.911668 kubelet[2204]: I1002 19:03:33.911647 2204 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:03:33.925330 systemd[1]: Created slice kubepods-besteffort-pode52f3db5_c159_4c0e_b050_fc5e0c6632c8.slice. Oct 2 19:03:33.951917 systemd[1]: Created slice kubepods-burstable-pod70ef4292_8675_4667_88cc_b5f4d4047034.slice. Oct 2 19:03:33.952471 kubelet[2204]: I1002 19:03:33.952440 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-cni-path\") pod \"cilium-t6jsn\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " pod="kube-system/cilium-t6jsn" Oct 2 19:03:33.952699 kubelet[2204]: I1002 19:03:33.952664 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e52f3db5-c159-4c0e-b050-fc5e0c6632c8-xtables-lock\") pod \"kube-proxy-sbr9d\" (UID: \"e52f3db5-c159-4c0e-b050-fc5e0c6632c8\") " pod="kube-system/kube-proxy-sbr9d" Oct 2 19:03:33.952872 kubelet[2204]: I1002 19:03:33.952839 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brtk9\" (UniqueName: \"kubernetes.io/projected/e52f3db5-c159-4c0e-b050-fc5e0c6632c8-kube-api-access-brtk9\") pod \"kube-proxy-sbr9d\" (UID: \"e52f3db5-c159-4c0e-b050-fc5e0c6632c8\") " pod="kube-system/kube-proxy-sbr9d" Oct 2 19:03:33.953040 kubelet[2204]: I1002 19:03:33.953019 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-cilium-run\") pod \"cilium-t6jsn\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " pod="kube-system/cilium-t6jsn" Oct 2 19:03:33.953219 kubelet[2204]: I1002 19:03:33.953186 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-bpf-maps\") pod \"cilium-t6jsn\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " pod="kube-system/cilium-t6jsn" Oct 2 19:03:33.953430 kubelet[2204]: I1002 19:03:33.953395 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e52f3db5-c159-4c0e-b050-fc5e0c6632c8-kube-proxy\") pod \"kube-proxy-sbr9d\" (UID: \"e52f3db5-c159-4c0e-b050-fc5e0c6632c8\") " pod="kube-system/kube-proxy-sbr9d" Oct 2 19:03:33.953580 kubelet[2204]: I1002 19:03:33.953560 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-xtables-lock\") pod \"cilium-t6jsn\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " pod="kube-system/cilium-t6jsn" Oct 2 19:03:33.953773 kubelet[2204]: I1002 19:03:33.953740 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70ef4292-8675-4667-88cc-b5f4d4047034-clustermesh-secrets\") pod \"cilium-t6jsn\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " pod="kube-system/cilium-t6jsn" Oct 2 19:03:33.953962 kubelet[2204]: I1002 19:03:33.953939 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70ef4292-8675-4667-88cc-b5f4d4047034-cilium-config-path\") pod \"cilium-t6jsn\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " pod="kube-system/cilium-t6jsn" Oct 2 19:03:33.954111 kubelet[2204]: I1002 19:03:33.954091 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-host-proc-sys-kernel\") pod \"cilium-t6jsn\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " pod="kube-system/cilium-t6jsn" Oct 2 19:03:33.954292 kubelet[2204]: I1002 19:03:33.954271 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckvxz\" (UniqueName: \"kubernetes.io/projected/70ef4292-8675-4667-88cc-b5f4d4047034-kube-api-access-ckvxz\") pod \"cilium-t6jsn\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " pod="kube-system/cilium-t6jsn" Oct 2 19:03:33.954454 kubelet[2204]: I1002 19:03:33.954433 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-lib-modules\") pod \"cilium-t6jsn\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " pod="kube-system/cilium-t6jsn" Oct 2 19:03:33.954630 kubelet[2204]: I1002 19:03:33.954597 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-hostproc\") pod \"cilium-t6jsn\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " pod="kube-system/cilium-t6jsn" Oct 2 19:03:33.954802 kubelet[2204]: I1002 19:03:33.954771 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-cilium-cgroup\") pod \"cilium-t6jsn\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " pod="kube-system/cilium-t6jsn" Oct 2 19:03:33.954974 kubelet[2204]: I1002 19:03:33.954953 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-etc-cni-netd\") pod \"cilium-t6jsn\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " pod="kube-system/cilium-t6jsn" Oct 2 19:03:33.955166 kubelet[2204]: I1002 19:03:33.955131 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-host-proc-sys-net\") pod \"cilium-t6jsn\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " pod="kube-system/cilium-t6jsn" Oct 2 19:03:33.955331 kubelet[2204]: I1002 
19:03:33.955298 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70ef4292-8675-4667-88cc-b5f4d4047034-hubble-tls\") pod \"cilium-t6jsn\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " pod="kube-system/cilium-t6jsn" Oct 2 19:03:33.955483 kubelet[2204]: I1002 19:03:33.955463 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e52f3db5-c159-4c0e-b050-fc5e0c6632c8-lib-modules\") pod \"kube-proxy-sbr9d\" (UID: \"e52f3db5-c159-4c0e-b050-fc5e0c6632c8\") " pod="kube-system/kube-proxy-sbr9d" Oct 2 19:03:33.955616 kubelet[2204]: I1002 19:03:33.955595 2204 reconciler.go:169] "Reconciler: start to sync state" Oct 2 19:03:34.249149 env[1745]: time="2023-10-02T19:03:34.248595319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sbr9d,Uid:e52f3db5-c159-4c0e-b050-fc5e0c6632c8,Namespace:kube-system,Attempt:0,}" Oct 2 19:03:34.566071 env[1745]: time="2023-10-02T19:03:34.565936705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t6jsn,Uid:70ef4292-8675-4667-88cc-b5f4d4047034,Namespace:kube-system,Attempt:0,}" Oct 2 19:03:34.851141 env[1745]: time="2023-10-02T19:03:34.851003622Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:03:34.853054 env[1745]: time="2023-10-02T19:03:34.853005279Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:03:34.857474 env[1745]: time="2023-10-02T19:03:34.857424733Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:03:34.859666 env[1745]: time="2023-10-02T19:03:34.859608138Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:03:34.862610 env[1745]: time="2023-10-02T19:03:34.862564279Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:03:34.865566 env[1745]: time="2023-10-02T19:03:34.865498225Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:03:34.870481 env[1745]: time="2023-10-02T19:03:34.870322529Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:03:34.876323 env[1745]: time="2023-10-02T19:03:34.876252590Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:03:34.909549 kubelet[2204]: E1002 19:03:34.909502 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:34.918616 env[1745]: 
time="2023-10-02T19:03:34.917723944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:03:34.918756 env[1745]: time="2023-10-02T19:03:34.918637888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:03:34.918756 env[1745]: time="2023-10-02T19:03:34.918723417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:03:34.919655 env[1745]: time="2023-10-02T19:03:34.919512865Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef pid=2302 runtime=io.containerd.runc.v2 Oct 2 19:03:34.935202 env[1745]: time="2023-10-02T19:03:34.935074210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:03:34.935378 env[1745]: time="2023-10-02T19:03:34.935150609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:03:34.935378 env[1745]: time="2023-10-02T19:03:34.935210452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:03:34.935654 env[1745]: time="2023-10-02T19:03:34.935576934Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a8d1c30c48416fb724d6e8efd2b6bc2f95eae70262180d78e55f5856bc1a529 pid=2318 runtime=io.containerd.runc.v2 Oct 2 19:03:34.967445 systemd[1]: Started cri-containerd-7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef.scope. Oct 2 19:03:35.003152 systemd[1]: Started cri-containerd-4a8d1c30c48416fb724d6e8efd2b6bc2f95eae70262180d78e55f5856bc1a529.scope. 
Oct 2 19:03:35.042000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.059059 kernel: audit: type=1400 audit(1696273415.042:650): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.059163 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:03:35.059234 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:03:35.042000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.067079 kernel: audit: type=1400 audit(1696273415.042:651): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.068806 kernel: audit: backlog limit exceeded Oct 2 19:03:35.071244 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:03:35.074031 kernel: audit: audit_lost=2 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:03:35.075717 kernel: audit: backlog limit exceeded Oct 2 19:03:35.042000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.042000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.042000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.042000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.042000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.085634 kernel: audit: type=1400 audit(1696273415.042:652): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.085697 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:03:35.042000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.042000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.042000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.042000 audit: BPF prog-id=76 op=LOAD Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=2318 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:35.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461386431633330633438343136666237323464366538656664326236 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2318 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:35.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461386431633330633438343136666237323464366538656664326236 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit: BPF prog-id=77 op=LOAD Oct 2 19:03:35.051000 audit[2329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2318 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:35.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461386431633330633438343136666237323464366538656664326236 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit: BPF prog-id=78 op=LOAD Oct 2 19:03:35.051000 audit[2329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=2318 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:35.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461386431633330633438343136666237323464366538656664326236 Oct 2 19:03:35.051000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:03:35.051000 audit: BPF prog-id=77 op=UNLOAD Oct 2 19:03:35.051000 
audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.051000 audit: BPF prog-id=79 op=LOAD Oct 2 19:03:35.051000 audit[2329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2318 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:35.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461386431633330633438343136666237323464366538656664326236 Oct 2 19:03:35.053000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.053000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.053000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.053000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.068000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.080000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.084000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.085000 audit: BPF prog-id=80 op=LOAD Oct 2 19:03:35.086000 audit[2321]: AVC avc: denied { bpf } for pid=2321 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.086000 audit[2321]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=400011db38 a2=10 a3=0 items=0 ppid=2302 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:35.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733343466343030616436643834393531326336636265356334636530 Oct 2 19:03:35.086000 audit[2321]: AVC avc: denied { perfmon } for pid=2321 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.086000 audit[2321]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=400011d5a0 a2=3c a3=0 items=0 ppid=2302 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:35.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733343466343030616436643834393531326336636265356334636530 Oct 2 19:03:35.091000 audit[2321]: AVC avc: denied { bpf } for pid=2321 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.091000 audit[2321]: AVC avc: denied { bpf } for pid=2321 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.091000 audit[2321]: AVC avc: denied { bpf } for pid=2321 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.091000 audit[2321]: AVC avc: denied { perfmon } for pid=2321 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.091000 audit[2321]: AVC avc: denied { perfmon } for pid=2321 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.091000 audit[2321]: AVC avc: 
denied { perfmon } for pid=2321 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.091000 audit[2321]: AVC avc: denied { perfmon } for pid=2321 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.091000 audit[2321]: AVC avc: denied { perfmon } for pid=2321 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.091000 audit[2321]: AVC avc: denied { bpf } for pid=2321 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.091000 audit[2321]: AVC avc: denied { bpf } for pid=2321 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.091000 audit: BPF prog-id=81 op=LOAD Oct 2 19:03:35.091000 audit[2321]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400011d8e0 a2=78 a3=0 items=0 ppid=2302 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:35.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733343466343030616436643834393531326336636265356334636530 Oct 2 19:03:35.092000 audit[2321]: AVC avc: denied { bpf } for pid=2321 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.092000 audit[2321]: AVC avc: denied { bpf } for pid=2321 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.092000 audit[2321]: AVC avc: denied { perfmon } for pid=2321 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.092000 audit[2321]: AVC avc: denied { perfmon } for pid=2321 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.092000 audit[2321]: AVC avc: denied { perfmon } for pid=2321 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.092000 audit[2321]: AVC avc: denied { perfmon } for pid=2321 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.092000 audit[2321]: AVC avc: denied { perfmon } for pid=2321 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.092000 audit[2321]: AVC avc: denied { bpf } for pid=2321 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.092000 audit[2321]: AVC avc: denied { bpf } for pid=2321 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.092000 audit: BPF prog-id=82 op=LOAD Oct 2 19:03:35.092000 audit[2321]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400011d670 a2=78 a3=0 items=0 ppid=2302 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:35.092000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733343466343030616436643834393531326336636265356334636530 Oct 2 19:03:35.093000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:03:35.094000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:03:35.094000 audit[2321]: AVC avc: denied { bpf } for pid=2321 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.094000 audit[2321]: AVC avc: denied { bpf } for pid=2321 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.094000 audit[2321]: AVC avc: denied { bpf } for pid=2321 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.095177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3774562934.mount: Deactivated successfully. Oct 2 19:03:35.094000 audit[2321]: AVC avc: denied { perfmon } for pid=2321 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.094000 audit[2321]: AVC avc: denied { perfmon } for pid=2321 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.094000 audit[2321]: AVC avc: denied { perfmon } for pid=2321 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.094000 audit[2321]: AVC avc: denied { perfmon } for pid=2321 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.094000 audit[2321]: AVC avc: denied { perfmon } for pid=2321 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.094000 audit[2321]: AVC avc: denied { bpf } for pid=2321 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.094000 audit[2321]: AVC avc: denied { bpf } for pid=2321 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:35.094000 audit: BPF prog-id=83 op=LOAD Oct 2 19:03:35.094000 audit[2321]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400011db40 a2=78 a3=0 items=0 ppid=2302 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:35.094000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733343466343030616436643834393531326336636265356334636530 Oct 2 19:03:35.130123 env[1745]: time="2023-10-02T19:03:35.129827930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sbr9d,Uid:e52f3db5-c159-4c0e-b050-fc5e0c6632c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a8d1c30c48416fb724d6e8efd2b6bc2f95eae70262180d78e55f5856bc1a529\"" Oct 2 19:03:35.137774 env[1745]: time="2023-10-02T19:03:35.137713343Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\"" Oct 2 19:03:35.145156 env[1745]: time="2023-10-02T19:03:35.145095730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t6jsn,Uid:70ef4292-8675-4667-88cc-b5f4d4047034,Namespace:kube-system,Attempt:0,} returns sandbox id \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\"" Oct 2 19:03:35.910429 kubelet[2204]: E1002 19:03:35.910364 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:36.391854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1307050580.mount: Deactivated successfully. Oct 2 19:03:36.910629 kubelet[2204]: E1002 19:03:36.910491 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:37.017245 env[1745]: time="2023-10-02T19:03:37.017165417Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:03:37.020317 env[1745]: time="2023-10-02T19:03:37.020256971Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:36ad84e6a838b02d80a9db87b13c83185253f647e2af2f58f91ac1346103ff4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:03:37.023199 env[1745]: time="2023-10-02T19:03:37.023148323Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:03:37.025488 env[1745]: time="2023-10-02T19:03:37.025443629Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:03:37.026377 env[1745]: time="2023-10-02T19:03:37.026329072Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\" returns image reference \"sha256:36ad84e6a838b02d80a9db87b13c83185253f647e2af2f58f91ac1346103ff4e\"" Oct 2 19:03:37.028352 env[1745]: time="2023-10-02T19:03:37.028286364Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\"" Oct 2 19:03:37.030479 env[1745]: time="2023-10-02T19:03:37.030424924Z" level=info msg="CreateContainer within sandbox \"4a8d1c30c48416fb724d6e8efd2b6bc2f95eae70262180d78e55f5856bc1a529\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:03:37.035310 kubelet[2204]: E1002 19:03:37.035263 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:03:37.059374 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1740698169.mount: Deactivated successfully. Oct 2 19:03:37.069814 env[1745]: time="2023-10-02T19:03:37.069753524Z" level=info msg="CreateContainer within sandbox \"4a8d1c30c48416fb724d6e8efd2b6bc2f95eae70262180d78e55f5856bc1a529\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9b367ba65c87556619ba8bd77ae27a41169895410175b69c86072de4273471bc\"" Oct 2 19:03:37.071279 env[1745]: time="2023-10-02T19:03:37.071209543Z" level=info msg="StartContainer for \"9b367ba65c87556619ba8bd77ae27a41169895410175b69c86072de4273471bc\"" Oct 2 19:03:37.120962 systemd[1]: Started cri-containerd-9b367ba65c87556619ba8bd77ae27a41169895410175b69c86072de4273471bc.scope. Oct 2 19:03:37.165000 audit[2386]: AVC avc: denied { perfmon } for pid=2386 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.165000 audit[2386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2318 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962333637626136356338373535363631396261386264373761653237 Oct 2 19:03:37.165000 audit[2386]: AVC avc: denied { bpf } for pid=2386 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.165000 audit[2386]: AVC avc: denied { bpf } for pid=2386 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.165000 audit[2386]: AVC avc: denied { bpf } for pid=2386 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.165000 audit[2386]: AVC avc: denied { perfmon } for pid=2386 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.165000 audit[2386]: AVC avc: denied { perfmon } for pid=2386 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.165000 audit[2386]: AVC avc: denied { perfmon } for pid=2386 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.165000 audit[2386]: AVC avc: denied { perfmon } for pid=2386 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.165000 audit[2386]: AVC avc: denied { perfmon } for pid=2386 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.165000 audit[2386]: AVC avc: denied { bpf } for pid=2386 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.165000 audit[2386]: AVC avc: denied { bpf } for pid=2386 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.165000 audit: BPF prog-id=84 op=LOAD Oct 2 19:03:37.165000 audit[2386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2318 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962333637626136356338373535363631396261386264373761653237 Oct 2 19:03:37.166000 audit[2386]: AVC avc: denied { bpf } for pid=2386 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.166000 audit[2386]: AVC avc: denied { bpf } for pid=2386 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.166000 audit[2386]: AVC avc: denied { perfmon } for pid=2386 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.166000 audit[2386]: AVC avc: denied { perfmon } for pid=2386 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.166000 audit[2386]: AVC avc: denied { perfmon } for pid=2386 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.166000 audit[2386]: AVC avc: denied { perfmon } for pid=2386 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.166000 audit[2386]: AVC avc: denied { perfmon } for pid=2386 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.166000 audit[2386]: AVC avc: denied { bpf } for pid=2386 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.166000 audit[2386]: AVC avc: denied { bpf } for pid=2386 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.166000 audit: BPF prog-id=85 op=LOAD Oct 2 19:03:37.166000 audit[2386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=2318 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962333637626136356338373535363631396261386264373761653237 Oct 2 19:03:37.167000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:03:37.167000 audit: BPF prog-id=84 op=UNLOAD Oct 2 
19:03:37.167000 audit[2386]: AVC avc: denied { bpf } for pid=2386 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.167000 audit[2386]: AVC avc: denied { bpf } for pid=2386 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.167000 audit[2386]: AVC avc: denied { bpf } for pid=2386 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.167000 audit[2386]: AVC avc: denied { perfmon } for pid=2386 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.167000 audit[2386]: AVC avc: denied { perfmon } for pid=2386 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.167000 audit[2386]: AVC avc: denied { perfmon } for pid=2386 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.167000 audit[2386]: AVC avc: denied { perfmon } for pid=2386 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.167000 audit[2386]: AVC avc: denied { perfmon } for pid=2386 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.167000 audit[2386]: AVC avc: denied { bpf } for pid=2386 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.167000 audit[2386]: AVC avc: denied { bpf } for pid=2386 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:03:37.167000 audit: BPF prog-id=86 op=LOAD Oct 2 19:03:37.167000 audit[2386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2318 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.167000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962333637626136356338373535363631396261386264373761653237 Oct 2 19:03:37.204798 env[1745]: time="2023-10-02T19:03:37.204692875Z" level=info msg="StartContainer for \"9b367ba65c87556619ba8bd77ae27a41169895410175b69c86072de4273471bc\" returns successfully" Oct 2 19:03:37.290719 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) Oct 2 19:03:37.290932 kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes) Oct 2 19:03:37.292201 kernel: IPVS: ipvs loaded. Oct 2 19:03:37.313944 kernel: IPVS: [rr] scheduler registered. Oct 2 19:03:37.330947 kernel: IPVS: [wrr] scheduler registered. Oct 2 19:03:37.345973 kernel: IPVS: [sh] scheduler registered. 
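The SYSCALL records above all report arch=c00000b7, the audit architecture token for little-endian 64-bit ARM, and syscall=280, which is bpf(2) on the arm64 (asm-generic) syscall table; that is why they appear interleaved with the "audit: BPF prog-id=... op=LOAD/UNLOAD" events emitted while runc sets up each container. The NETFILTER_CFG records that follow instead carry syscall=211 (sendmsg), i.e. iptables-nft pushing rules over netlink. A minimal lookup sketch, assuming Python 3; the tables and the describe() helper are illustrative and only cover the numbers seen in this excerpt:

```python
# Numbers taken from the SYSCALL records in this excerpt, resolved by hand
# from the arm64 (asm-generic) syscall table; for anything else, the audit
# package's ausyscall utility is the authoritative mapping.
AUDIT_ARCH = {0xC00000B7: "aarch64 (little-endian, 64-bit)"}

SYSCALLS_AARCH64 = {
    211: "sendmsg",  # iptables-nft pushing nft rules over netlink (records below)
    280: "bpf",      # runc loading its process-tracking BPF programs (records above)
}

def describe(arch: int, nr: int) -> str:
    """Render an audit SYSCALL arch/number pair in readable form."""
    name = SYSCALLS_AARCH64.get(nr, "unknown")
    return f"{AUDIT_ARCH.get(arch, hex(arch))}: syscall {nr} = {name}"

print(describe(0xC00000B7, 280))
# aarch64 (little-endian, 64-bit): syscall 280 = bpf
```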
Oct 2 19:03:37.455000 audit[2443]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:37.455000 audit[2443]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff08b9100 a2=0 a3=ffff935756c0 items=0 ppid=2396 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.455000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:03:37.460000 audit[2444]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=2444 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:37.460000 audit[2444]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffecf108e0 a2=0 a3=ffff8ef476c0 items=0 ppid=2396 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.460000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:03:37.467000 audit[2445]: NETFILTER_CFG table=nat:37 family=10 entries=1 op=nft_register_chain pid=2445 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:37.467000 audit[2445]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc7ab3d20 a2=0 a3=ffff9ddff6c0 items=0 ppid=2396 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.467000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:03:37.469000 audit[2446]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_chain pid=2446 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:37.469000 audit[2446]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe45949f0 a2=0 a3=ffff9b0726c0 items=0 ppid=2396 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.469000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:03:37.472000 audit[2447]: NETFILTER_CFG table=filter:39 family=10 entries=1 op=nft_register_chain pid=2447 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:37.472000 audit[2447]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd016f950 a2=0 a3=ffff986846c0 items=0 ppid=2396 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.472000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:03:37.474000 audit[2448]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 
19:03:37.474000 audit[2448]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffcaa6e40 a2=0 a3=ffffbd9e76c0 items=0 ppid=2396 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.474000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:03:37.564000 audit[2449]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:37.564000 audit[2449]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffdc74c4a0 a2=0 a3=ffff9eb4d6c0 items=0 ppid=2396 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.564000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:03:37.572000 audit[2451]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:37.572000 audit[2451]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc9f92330 a2=0 a3=ffff9950d6c0 items=0 ppid=2396 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.572000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:03:37.584000 audit[2454]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:37.584000 audit[2454]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd86e91c0 a2=0 a3=ffffab5576c0 items=0 ppid=2396 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.584000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:03:37.588000 audit[2455]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:37.588000 audit[2455]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef0ccbb0 a2=0 a3=ffff99b0c6c0 items=0 ppid=2396 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.588000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:03:37.596000 audit[2457]: 
NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2457 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:37.596000 audit[2457]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcd77eee0 a2=0 a3=ffffb3e7c6c0 items=0 ppid=2396 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.596000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:03:37.600000 audit[2458]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=2458 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:37.600000 audit[2458]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd1a15f30 a2=0 a3=ffffacca86c0 items=0 ppid=2396 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.600000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:03:37.609000 audit[2460]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:37.609000 audit[2460]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd472ff70 a2=0 a3=ffffa20576c0 items=0 ppid=2396 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.609000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:03:37.620000 audit[2463]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2463 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:37.620000 audit[2463]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdbaad8d0 a2=0 a3=ffff9dfcc6c0 items=0 ppid=2396 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.620000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:03:37.624000 audit[2464]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:37.624000 audit[2464]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc1790770 a2=0 a3=ffffa8a1b6c0 items=0 ppid=2396 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.624000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:03:37.633000 audit[2466]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2466 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:37.633000 audit[2466]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff1b27700 a2=0 a3=ffff836e96c0 items=0 ppid=2396 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.633000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:03:37.638000 audit[2467]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2467 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:37.638000 audit[2467]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe96a0370 a2=0 a3=ffffbdc896c0 items=0 ppid=2396 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.638000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:03:37.647000 audit[2469]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=2469 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:37.647000 audit[2469]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc12d4d00 a2=0 a3=ffff8fbbb6c0 items=0 ppid=2396 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.647000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:03:37.659000 audit[2472]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:37.659000 audit[2472]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc84c1500 a2=0 a3=ffff8c9996c0 items=0 ppid=2396 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.659000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:03:37.671000 audit[2475]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2475 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:37.671000 audit[2475]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffff074220 a2=0 a3=ffff99c846c0 items=0 ppid=2396 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.671000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:03:37.674000 audit[2476]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=2476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:37.674000 audit[2476]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffbab8540 a2=0 a3=ffff80c216c0 items=0 ppid=2396 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.674000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:03:37.682000 audit[2478]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:37.682000 audit[2478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=fffffc578870 a2=0 a3=ffff9c2f76c0 items=0 ppid=2396 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.682000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:03:37.693000 audit[2481]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:03:37.693000 audit[2481]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffffbba0890 a2=0 a3=ffffa1d516c0 items=0 ppid=2396 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.693000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:03:37.720000 audit[2485]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:03:37.720000 audit[2485]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffffaea410 a2=0 a3=ffff9d2b56c0 items=0 ppid=2396 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.720000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:03:37.738000 
audit[2485]: NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:03:37.738000 audit[2485]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffffaea410 a2=0 a3=ffff9d2b56c0 items=0 ppid=2396 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.738000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:03:37.746000 audit[2489]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:37.746000 audit[2489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff102f370 a2=0 a3=ffffb9cfd6c0 items=0 ppid=2396 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.746000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:03:37.757000 audit[2491]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:37.757000 audit[2491]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffeabe44b0 a2=0 a3=ffff8e6df6c0 items=0 ppid=2396 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.757000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:03:37.770000 audit[2494]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=2494 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:37.770000 audit[2494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff8b9c000 a2=0 a3=ffffb36576c0 items=0 ppid=2396 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.770000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:03:37.774000 audit[2495]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:37.774000 audit[2495]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffabbead0 a2=0 a3=ffffb667f6c0 items=0 ppid=2396 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 
19:03:37.774000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:03:37.783000 audit[2497]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=2497 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:37.783000 audit[2497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffb70ea70 a2=0 a3=ffff8bbd16c0 items=0 ppid=2396 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.783000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:03:37.788000 audit[2498]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:37.788000 audit[2498]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc1cd6af0 a2=0 a3=ffff95f5f6c0 items=0 ppid=2396 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.788000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:03:37.798000 audit[2500]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=2500 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:37.798000 audit[2500]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd6b77a50 a2=0 a3=ffff8531b6c0 items=0 ppid=2396 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.798000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:03:37.810000 audit[2503]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2503 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:37.810000 audit[2503]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffdefae2d0 a2=0 a3=ffff8527f6c0 items=0 ppid=2396 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.810000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:03:37.814000 audit[2504]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2504 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:37.814000 audit[2504]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 
a1=ffffde817c20 a2=0 a3=ffff8ec8b6c0 items=0 ppid=2396 pid=2504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.814000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:03:37.822000 audit[2506]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2506 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:37.822000 audit[2506]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe1961530 a2=0 a3=ffffb83ae6c0 items=0 ppid=2396 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.822000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:03:37.826000 audit[2507]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2507 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:37.826000 audit[2507]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc286a0c0 a2=0 a3=ffffa8a0a6c0 items=0 ppid=2396 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.826000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:03:37.834000 audit[2509]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2509 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:37.834000 audit[2509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffda048df0 a2=0 a3=ffffb6a1f6c0 items=0 ppid=2396 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.834000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:03:37.846000 audit[2512]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=2512 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:37.846000 audit[2512]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd9c418b0 a2=0 a3=ffff863666c0 items=0 ppid=2396 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.846000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 
19:03:37.860000 audit[2515]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=2515 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:37.860000 audit[2515]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff92682a0 a2=0 a3=ffff961d46c0 items=0 ppid=2396 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.860000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:03:37.864000 audit[2516]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=2516 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:37.864000 audit[2516]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe920f5d0 a2=0 a3=ffff80f786c0 items=0 ppid=2396 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.864000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:03:37.874000 audit[2518]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=2518 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:37.874000 audit[2518]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffd750dad0 a2=0 a3=ffffa9d326c0 items=0 ppid=2396 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.874000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:03:37.885000 audit[2521]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=2521 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:03:37.885000 audit[2521]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffd93da9a0 a2=0 a3=ffff844486c0 items=0 ppid=2396 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.885000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:03:37.902000 audit[2525]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=2525 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:03:37.902000 audit[2525]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffc497b280 a2=0 a3=ffffb98806c0 items=0 ppid=2396 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.902000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:03:37.903000 audit[2525]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=2525 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:03:37.903000 audit[2525]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1860 a0=3 a1=ffffc497b280 a2=0 a3=ffffb98806c0 items=0 ppid=2396 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:03:37.903000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:03:37.911658 kubelet[2204]: E1002 19:03:37.911600 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:38.912166 kubelet[2204]: E1002 19:03:38.912099 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:39.912981 kubelet[2204]: E1002 19:03:39.912938 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:40.913915 kubelet[2204]: E1002 19:03:40.913852 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:41.420606 update_engine[1730]: I1002 19:03:41.419265 1730 update_attempter.cc:505] Updating boot flags... Oct 2 19:03:41.914033 kubelet[2204]: E1002 19:03:41.913978 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:42.037650 kubelet[2204]: E1002 19:03:42.037597 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:03:42.914854 kubelet[2204]: E1002 19:03:42.914762 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:43.915622 kubelet[2204]: E1002 19:03:43.915556 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:44.229406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1399662764.mount: Deactivated successfully. 
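Each audit record above also carries a PROCTITLE line. The value is hex-encoded because the recorded process title contains NUL bytes separating the argv entries, so the long hex strings are simply the iptables/ip6tables command lines that created the KUBE-* chains and rules (and, earlier, the runc invocations). A minimal decoding sketch, assuming Python 3.9+; decode_proctitle is an illustrative helper name, and the sample is the first KUBE-PROXY-CANARY record copied verbatim:

```python
# Audit hex-encodes PROCTITLE whenever the process title contains
# non-printable bytes; here the NUL separators between argv entries trigger
# that, so decoding and splitting on NUL recovers the command line.
PROCTITLE = (  # the KUBE-PROXY-CANARY mangle-table record above, verbatim
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
)

def decode_proctitle(hex_value: str) -> list[str]:
    """Split a hex-encoded audit proctitle into its NUL-separated argv entries."""
    return [
        part.decode("utf-8", errors="replace")
        for part in bytes.fromhex(hex_value).split(b"\x00")
        if part
    ]

print(" ".join(decode_proctitle(PROCTITLE)))
# iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle
```

Applied to the runc records earlier in the excerpt, the same helper yields command lines of the form runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container-id>...; audit truncates long process titles, which is why that final argument is cut short.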
Oct 2 19:03:44.916354 kubelet[2204]: E1002 19:03:44.916291 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:45.917061 kubelet[2204]: E1002 19:03:45.916972 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:46.918115 kubelet[2204]: E1002 19:03:46.918046 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:47.039293 kubelet[2204]: E1002 19:03:47.039242 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:03:47.918382 kubelet[2204]: E1002 19:03:47.918316 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:48.147013 env[1745]: time="2023-10-02T19:03:48.146930358Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:03:48.150052 env[1745]: time="2023-10-02T19:03:48.149991119Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4204f456d3e4a8a7ac29109cf66dfd9b53e82d3f2e8574599e358096d890b8db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:03:48.153054 env[1745]: time="2023-10-02T19:03:48.152971735Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:03:48.154555 env[1745]: time="2023-10-02T19:03:48.154504282Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\" returns image reference \"sha256:4204f456d3e4a8a7ac29109cf66dfd9b53e82d3f2e8574599e358096d890b8db\"" Oct 2 19:03:48.161250 env[1745]: time="2023-10-02T19:03:48.161183824Z" level=info msg="CreateContainer within sandbox \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:03:48.184090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3465321894.mount: Deactivated successfully. Oct 2 19:03:48.193940 env[1745]: time="2023-10-02T19:03:48.193849417Z" level=info msg="CreateContainer within sandbox \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f\"" Oct 2 19:03:48.194734 env[1745]: time="2023-10-02T19:03:48.194675793Z" level=info msg="StartContainer for \"1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f\"" Oct 2 19:03:48.241775 systemd[1]: Started cri-containerd-1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f.scope. Oct 2 19:03:48.283886 systemd[1]: cri-containerd-1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f.scope: Deactivated successfully. 
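The env[1745] containerd records in this stretch (ImageCreate, ImageUpdate, PullImage, CreateContainer, StartContainer) all share a logfmt-like shape of time="..." level=... msg="...". A small parsing sketch, assuming the journal prefix has already been stripped; the sample string is an abbreviated stand-in for the records above, not a verbatim copy:

```python
import re

# Matches the time/level/msg triple used by the containerd records above.
# msg may contain escaped quotes, hence the (?:[^"\\]|\\.)* group.
CONTAINERD_LINE = re.compile(
    r'time="(?P<time>[^"]+)"\s+level=(?P<level>\w+)\s+msg="(?P<msg>(?:[^"\\]|\\.)*)"'
)

sample = 'time="2023-10-02T19:03:48.154504282Z" level=info msg="PullImage returns image reference"'
match = CONTAINERD_LINE.search(sample)
if match:
    print(match.group("time"), match.group("level"), match.group("msg"), sep=" | ")
```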
Oct 2 19:03:48.919168 kubelet[2204]: E1002 19:03:48.919102 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:49.176568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f-rootfs.mount: Deactivated successfully. Oct 2 19:03:49.722998 env[1745]: time="2023-10-02T19:03:49.722890200Z" level=info msg="shim disconnected" id=1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f Oct 2 19:03:49.723562 env[1745]: time="2023-10-02T19:03:49.722997737Z" level=warning msg="cleaning up after shim disconnected" id=1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f namespace=k8s.io Oct 2 19:03:49.723562 env[1745]: time="2023-10-02T19:03:49.723022611Z" level=info msg="cleaning up dead shim" Oct 2 19:03:49.749133 env[1745]: time="2023-10-02T19:03:49.749060062Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:03:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2734 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:03:49Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:03:49.749668 env[1745]: time="2023-10-02T19:03:49.749510318Z" level=error msg="copy shim log" error="read /proc/self/fd/53: file already closed" Oct 2 19:03:49.750059 env[1745]: time="2023-10-02T19:03:49.749994402Z" level=error msg="Failed to pipe stdout of container \"1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f\"" error="reading from a closed fifo" Oct 2 19:03:49.750305 env[1745]: time="2023-10-02T19:03:49.750237050Z" level=error msg="Failed to pipe stderr of container \"1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f\"" error="reading from a closed fifo" Oct 2 19:03:49.752778 env[1745]: time="2023-10-02T19:03:49.752695392Z" level=error msg="StartContainer for \"1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:03:49.753270 kubelet[2204]: E1002 19:03:49.753235 2204 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f" Oct 2 19:03:49.753420 kubelet[2204]: E1002 19:03:49.753389 2204 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:03:49.753420 kubelet[2204]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:03:49.753420 kubelet[2204]: rm /hostbin/cilium-mount Oct 2 19:03:49.753420 kubelet[2204]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ckvxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-t6jsn_kube-system(70ef4292-8675-4667-88cc-b5f4d4047034): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:03:49.753755 kubelet[2204]: E1002 19:03:49.753465 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-t6jsn" podUID=70ef4292-8675-4667-88cc-b5f4d4047034 Oct 2 19:03:49.920010 kubelet[2204]: E1002 19:03:49.919957 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:50.405234 env[1745]: time="2023-10-02T19:03:50.405164796Z" level=info msg="CreateContainer within sandbox \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:03:50.426307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1063327392.mount: Deactivated successfully. Oct 2 19:03:50.439737 env[1745]: time="2023-10-02T19:03:50.439655252Z" level=info msg="CreateContainer within sandbox \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215\"" Oct 2 19:03:50.440892 env[1745]: time="2023-10-02T19:03:50.440844244Z" level=info msg="StartContainer for \"449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215\"" Oct 2 19:03:50.487505 systemd[1]: Started cri-containerd-449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215.scope. Oct 2 19:03:50.524981 systemd[1]: cri-containerd-449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215.scope: Deactivated successfully. 
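Every StartContainer attempt for the mount-cgroup init container dies in the runtime with "write /proc/self/attr/keycreate: invalid argument": the runtime's SELinux handling writes a keyring-creation label into /proc/self/attr/keycreate before exec (the spec dumped above does request SELinuxOptions with Type:spc_t, Level:s0), and on this host the kernel rejects that write with EINVAL. A rough illustration of the failing operation only, not runc's actual code path, and the label string below is a guess for illustration:

```python
import errno


def set_keycreate_label(label: str = "system_u:system_r:spc_t:s0") -> None:
    """Attempt the same style of write that fails in the log above.

    Illustration only: the label value is an assumption, and real runtimes do
    this through their SELinux bindings rather than a bare file write.
    """
    try:
        with open("/proc/self/attr/keycreate", "w") as f:
            f.write(label)
    except OSError as exc:
        if exc.errno == errno.EINVAL:
            # The failure mode recorded above:
            # "write /proc/self/attr/keycreate: invalid argument"
            print("kernel rejected the keycreate label (EINVAL)")
        elif exc.errno == errno.ENOENT:
            print("no SELinux attr interface exposed on this kernel")
        else:
            raise


if __name__ == "__main__":
    set_keycreate_label()
```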
Oct 2 19:03:50.546552 env[1745]: time="2023-10-02T19:03:50.546466669Z" level=info msg="shim disconnected" id=449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215 Oct 2 19:03:50.546838 env[1745]: time="2023-10-02T19:03:50.546553460Z" level=warning msg="cleaning up after shim disconnected" id=449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215 namespace=k8s.io Oct 2 19:03:50.546838 env[1745]: time="2023-10-02T19:03:50.546577661Z" level=info msg="cleaning up dead shim" Oct 2 19:03:50.573223 env[1745]: time="2023-10-02T19:03:50.573152144Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:03:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2771 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:03:50Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:03:50.573688 env[1745]: time="2023-10-02T19:03:50.573602342Z" level=error msg="copy shim log" error="read /proc/self/fd/53: file already closed" Oct 2 19:03:50.575111 env[1745]: time="2023-10-02T19:03:50.575054096Z" level=error msg="Failed to pipe stdout of container \"449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215\"" error="reading from a closed fifo" Oct 2 19:03:50.577075 env[1745]: time="2023-10-02T19:03:50.576998604Z" level=error msg="Failed to pipe stderr of container \"449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215\"" error="reading from a closed fifo" Oct 2 19:03:50.579271 env[1745]: time="2023-10-02T19:03:50.579200221Z" level=error msg="StartContainer for \"449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:03:50.579947 kubelet[2204]: E1002 19:03:50.579627 2204 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215" Oct 2 19:03:50.579947 kubelet[2204]: E1002 19:03:50.579805 2204 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:03:50.579947 kubelet[2204]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:03:50.579947 kubelet[2204]: rm /hostbin/cilium-mount Oct 2 19:03:50.580269 kubelet[2204]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ckvxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-t6jsn_kube-system(70ef4292-8675-4667-88cc-b5f4d4047034): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:03:50.581216 kubelet[2204]: E1002 19:03:50.579888 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-t6jsn" podUID=70ef4292-8675-4667-88cc-b5f4d4047034 Oct 2 19:03:50.920586 kubelet[2204]: E1002 19:03:50.920504 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:51.405423 kubelet[2204]: I1002 19:03:51.405382 2204 scope.go:115] "RemoveContainer" containerID="1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f" Oct 2 19:03:51.405943 kubelet[2204]: I1002 19:03:51.405887 2204 scope.go:115] "RemoveContainer" containerID="1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f" Oct 2 19:03:51.408301 env[1745]: time="2023-10-02T19:03:51.408254478Z" level=info msg="RemoveContainer for \"1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f\"" Oct 2 19:03:51.411596 env[1745]: time="2023-10-02T19:03:51.411539771Z" level=info msg="RemoveContainer for \"1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f\"" Oct 2 19:03:51.411753 env[1745]: time="2023-10-02T19:03:51.411688382Z" level=error msg="RemoveContainer for \"1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f\" failed" error="failed to set removing state for container \"1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f\": container is already in removing state" Oct 2 19:03:51.412179 kubelet[2204]: E1002 19:03:51.412146 2204 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container 
\"1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f\": container is already in removing state" containerID="1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f" Oct 2 19:03:51.412315 kubelet[2204]: E1002 19:03:51.412222 2204 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f": container is already in removing state; Skipping pod "cilium-t6jsn_kube-system(70ef4292-8675-4667-88cc-b5f4d4047034)" Oct 2 19:03:51.412660 kubelet[2204]: E1002 19:03:51.412618 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-t6jsn_kube-system(70ef4292-8675-4667-88cc-b5f4d4047034)\"" pod="kube-system/cilium-t6jsn" podUID=70ef4292-8675-4667-88cc-b5f4d4047034 Oct 2 19:03:51.415996 env[1745]: time="2023-10-02T19:03:51.415923703Z" level=info msg="RemoveContainer for \"1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f\" returns successfully" Oct 2 19:03:51.422067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215-rootfs.mount: Deactivated successfully. Oct 2 19:03:51.894091 kubelet[2204]: E1002 19:03:51.894050 2204 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:51.921367 kubelet[2204]: E1002 19:03:51.921323 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:52.039928 kubelet[2204]: E1002 19:03:52.039868 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:03:52.409742 kubelet[2204]: E1002 19:03:52.409690 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-t6jsn_kube-system(70ef4292-8675-4667-88cc-b5f4d4047034)\"" pod="kube-system/cilium-t6jsn" podUID=70ef4292-8675-4667-88cc-b5f4d4047034 Oct 2 19:03:52.837342 kubelet[2204]: W1002 19:03:52.837273 2204 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70ef4292_8675_4667_88cc_b5f4d4047034.slice/cri-containerd-1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f.scope WatchSource:0}: container "1f123b8dc6b91294aa7d1937aa2b4c7ae4045010849938b3afea4a4166f8943f" in namespace "k8s.io": not found Oct 2 19:03:52.921676 kubelet[2204]: E1002 19:03:52.921622 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:53.922757 kubelet[2204]: E1002 19:03:53.922693 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:54.923325 kubelet[2204]: E1002 19:03:54.923269 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:55.924228 kubelet[2204]: E1002 19:03:55.924169 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:03:55.946137 kubelet[2204]: W1002 19:03:55.945995 2204 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70ef4292_8675_4667_88cc_b5f4d4047034.slice/cri-containerd-449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215.scope WatchSource:0}: task 449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215 not found: not found Oct 2 19:03:56.925537 kubelet[2204]: E1002 19:03:56.925470 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:57.041844 kubelet[2204]: E1002 19:03:57.041797 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:03:57.926319 kubelet[2204]: E1002 19:03:57.926261 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:58.926736 kubelet[2204]: E1002 19:03:58.926698 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:03:59.927598 kubelet[2204]: E1002 19:03:59.927532 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:00.928085 kubelet[2204]: E1002 19:04:00.928049 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:01.928825 kubelet[2204]: E1002 19:04:01.928784 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:02.042324 kubelet[2204]: E1002 19:04:02.042292 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:04:02.930454 kubelet[2204]: E1002 19:04:02.930393 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:03.931256 kubelet[2204]: E1002 19:04:03.931194 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:04.931798 kubelet[2204]: E1002 19:04:04.931705 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:05.318894 env[1745]: time="2023-10-02T19:04:05.318838637Z" level=info msg="CreateContainer within sandbox \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:04:05.335663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount9353475.mount: Deactivated successfully. Oct 2 19:04:05.348745 env[1745]: time="2023-10-02T19:04:05.346026412Z" level=info msg="CreateContainer within sandbox \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7\"" Oct 2 19:04:05.348745 env[1745]: time="2023-10-02T19:04:05.347071289Z" level=info msg="StartContainer for \"9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7\"" Oct 2 19:04:05.393829 systemd[1]: Started cri-containerd-9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7.scope. 
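Each retry of the init container shows up as a CreateContainer record that names the attempt number and returns a fresh container id (Attempt:0 -> 1f123b8d..., Attempt:1 -> 449a3db2..., Attempt:2 -> 9665797c... above). A small extraction sketch for pairing those up from a saved copy of this console output; journal.txt is a hypothetical file name:

```python
import re

# The CreateContainer records quote the returned id as \"<64 hex chars>\";
# \W* skips that backslash-escaped quoting.
ATTEMPT = re.compile(r"Attempt:(\d+),\} returns container id \W*([0-9a-f]{64})")

# journal.txt is a hypothetical dump of this console log.
with open("journal.txt", encoding="utf-8", errors="replace") as fh:
    for attempt, container_id in ATTEMPT.findall(fh.read()):
        print(f"attempt {attempt}: {container_id[:12]}")
```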
Oct 2 19:04:05.435879 systemd[1]: cri-containerd-9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7.scope: Deactivated successfully. Oct 2 19:04:05.460383 env[1745]: time="2023-10-02T19:04:05.460316244Z" level=info msg="shim disconnected" id=9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7 Oct 2 19:04:05.460813 env[1745]: time="2023-10-02T19:04:05.460768651Z" level=warning msg="cleaning up after shim disconnected" id=9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7 namespace=k8s.io Oct 2 19:04:05.460975 env[1745]: time="2023-10-02T19:04:05.460946749Z" level=info msg="cleaning up dead shim" Oct 2 19:04:05.486434 env[1745]: time="2023-10-02T19:04:05.486372295Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:04:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2811 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:04:05Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:04:05.487166 env[1745]: time="2023-10-02T19:04:05.487088639Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:04:05.488401 env[1745]: time="2023-10-02T19:04:05.487364303Z" level=error msg="Failed to pipe stdout of container \"9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7\"" error="reading from a closed fifo" Oct 2 19:04:05.488623 env[1745]: time="2023-10-02T19:04:05.487995936Z" level=error msg="Failed to pipe stderr of container \"9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7\"" error="reading from a closed fifo" Oct 2 19:04:05.490247 env[1745]: time="2023-10-02T19:04:05.490188296Z" level=error msg="StartContainer for \"9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:04:05.490699 kubelet[2204]: E1002 19:04:05.490658 2204 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7" Oct 2 19:04:05.490859 kubelet[2204]: E1002 19:04:05.490817 2204 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:04:05.490859 kubelet[2204]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:04:05.490859 kubelet[2204]: rm /hostbin/cilium-mount Oct 2 19:04:05.490859 kubelet[2204]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ckvxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-t6jsn_kube-system(70ef4292-8675-4667-88cc-b5f4d4047034): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:04:05.491282 kubelet[2204]: E1002 19:04:05.490880 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-t6jsn" podUID=70ef4292-8675-4667-88cc-b5f4d4047034 Oct 2 19:04:05.932672 kubelet[2204]: E1002 19:04:05.932596 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:06.330218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7-rootfs.mount: Deactivated successfully. 
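The pod_workers records bracket each failure with a CrashLoopBackOff delay that doubles: 10s above, then 20s and 40s further down in this log. That progression is consistent with a simple doubling restart back-off; a minimal sketch, assuming a 10-second initial delay and the commonly used 5-minute cap:

```python
def backoff_schedule(initial: float = 10.0, cap: float = 300.0, tries: int = 6):
    """Yield doubling restart delays, capped; matches the 10s/20s/40s seen in this log."""
    delay = initial
    for _ in range(tries):
        yield min(delay, cap)
        delay *= 2


print(list(backoff_schedule()))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]
```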
Oct 2 19:04:06.443416 kubelet[2204]: I1002 19:04:06.443382 2204 scope.go:115] "RemoveContainer" containerID="449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215" Oct 2 19:04:06.443993 kubelet[2204]: I1002 19:04:06.443955 2204 scope.go:115] "RemoveContainer" containerID="449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215" Oct 2 19:04:06.445813 env[1745]: time="2023-10-02T19:04:06.445746377Z" level=info msg="RemoveContainer for \"449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215\"" Oct 2 19:04:06.447416 env[1745]: time="2023-10-02T19:04:06.447309637Z" level=info msg="RemoveContainer for \"449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215\"" Oct 2 19:04:06.447690 env[1745]: time="2023-10-02T19:04:06.447590220Z" level=error msg="RemoveContainer for \"449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215\" failed" error="failed to set removing state for container \"449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215\": container is already in removing state" Oct 2 19:04:06.449165 kubelet[2204]: E1002 19:04:06.449130 2204 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215\": container is already in removing state" containerID="449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215" Oct 2 19:04:06.449440 kubelet[2204]: I1002 19:04:06.449417 2204 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215} err="rpc error: code = Unknown desc = failed to set removing state for container \"449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215\": container is already in removing state" Oct 2 19:04:06.451333 env[1745]: time="2023-10-02T19:04:06.451236150Z" level=info msg="RemoveContainer for \"449a3db2da831431da0ec3d395a120ff2486e84ae0033e5b6a201dfc24a8e215\" returns successfully" Oct 2 19:04:06.452316 kubelet[2204]: E1002 19:04:06.452286 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-t6jsn_kube-system(70ef4292-8675-4667-88cc-b5f4d4047034)\"" pod="kube-system/cilium-t6jsn" podUID=70ef4292-8675-4667-88cc-b5f4d4047034 Oct 2 19:04:06.933553 kubelet[2204]: E1002 19:04:06.933513 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:07.044263 kubelet[2204]: E1002 19:04:07.044229 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:04:07.934342 kubelet[2204]: E1002 19:04:07.934274 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:08.566121 kubelet[2204]: W1002 19:04:08.566066 2204 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70ef4292_8675_4667_88cc_b5f4d4047034.slice/cri-containerd-9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7.scope WatchSource:0}: task 9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7 not found: not found Oct 2 19:04:08.935461 kubelet[2204]: E1002 19:04:08.935300 
2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:09.936252 kubelet[2204]: E1002 19:04:09.936203 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:10.937575 kubelet[2204]: E1002 19:04:10.937526 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:11.893996 kubelet[2204]: E1002 19:04:11.893954 2204 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:11.939305 kubelet[2204]: E1002 19:04:11.939263 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:12.045613 kubelet[2204]: E1002 19:04:12.045531 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:04:12.939713 kubelet[2204]: E1002 19:04:12.939665 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:13.941139 kubelet[2204]: E1002 19:04:13.941092 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:14.942746 kubelet[2204]: E1002 19:04:14.942706 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:15.943635 kubelet[2204]: E1002 19:04:15.943552 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:16.944765 kubelet[2204]: E1002 19:04:16.944716 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:17.047241 kubelet[2204]: E1002 19:04:17.047206 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:04:17.945973 kubelet[2204]: E1002 19:04:17.945923 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:18.947340 kubelet[2204]: E1002 19:04:18.947259 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:19.315369 kubelet[2204]: E1002 19:04:19.315303 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-t6jsn_kube-system(70ef4292-8675-4667-88cc-b5f4d4047034)\"" pod="kube-system/cilium-t6jsn" podUID=70ef4292-8675-4667-88cc-b5f4d4047034 Oct 2 19:04:19.948470 kubelet[2204]: E1002 19:04:19.948402 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:20.948859 kubelet[2204]: E1002 19:04:20.948785 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:21.949641 kubelet[2204]: E1002 19:04:21.949568 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:22.048220 
kubelet[2204]: E1002 19:04:22.048187 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:04:22.950597 kubelet[2204]: E1002 19:04:22.950550 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:23.952226 kubelet[2204]: E1002 19:04:23.952153 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:24.952914 kubelet[2204]: E1002 19:04:24.952846 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:25.953486 kubelet[2204]: E1002 19:04:25.953434 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:26.953872 kubelet[2204]: E1002 19:04:26.953804 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:27.050146 kubelet[2204]: E1002 19:04:27.050094 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:04:27.955004 kubelet[2204]: E1002 19:04:27.954944 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:28.955528 kubelet[2204]: E1002 19:04:28.955481 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:29.957166 kubelet[2204]: E1002 19:04:29.957094 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:30.958238 kubelet[2204]: E1002 19:04:30.958190 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:31.894740 kubelet[2204]: E1002 19:04:31.894694 2204 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:31.959477 kubelet[2204]: E1002 19:04:31.959434 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:32.050924 kubelet[2204]: E1002 19:04:32.050876 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:04:32.319567 env[1745]: time="2023-10-02T19:04:32.319508231Z" level=info msg="CreateContainer within sandbox \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:04:32.338701 env[1745]: time="2023-10-02T19:04:32.338636677Z" level=info msg="CreateContainer within sandbox \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4\"" Oct 2 19:04:32.339954 env[1745]: time="2023-10-02T19:04:32.339655365Z" level=info msg="StartContainer for \"dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4\"" Oct 2 19:04:32.386503 systemd[1]: Started 
cri-containerd-dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4.scope. Oct 2 19:04:32.393502 systemd[1]: run-containerd-runc-k8s.io-dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4-runc.ZxbqjU.mount: Deactivated successfully. Oct 2 19:04:32.431643 systemd[1]: cri-containerd-dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4.scope: Deactivated successfully. Oct 2 19:04:32.439134 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4-rootfs.mount: Deactivated successfully. Oct 2 19:04:32.454457 env[1745]: time="2023-10-02T19:04:32.454389532Z" level=info msg="shim disconnected" id=dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4 Oct 2 19:04:32.454827 env[1745]: time="2023-10-02T19:04:32.454794612Z" level=warning msg="cleaning up after shim disconnected" id=dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4 namespace=k8s.io Oct 2 19:04:32.455001 env[1745]: time="2023-10-02T19:04:32.454973342Z" level=info msg="cleaning up dead shim" Oct 2 19:04:32.480198 env[1745]: time="2023-10-02T19:04:32.480131101Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:04:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2852 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:04:32Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:04:32.480918 env[1745]: time="2023-10-02T19:04:32.480820352Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:04:32.481264 env[1745]: time="2023-10-02T19:04:32.481211259Z" level=error msg="Failed to pipe stdout of container \"dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4\"" error="reading from a closed fifo" Oct 2 19:04:32.481476 env[1745]: time="2023-10-02T19:04:32.481414591Z" level=error msg="Failed to pipe stderr of container \"dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4\"" error="reading from a closed fifo" Oct 2 19:04:32.483725 env[1745]: time="2023-10-02T19:04:32.483645555Z" level=error msg="StartContainer for \"dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:04:32.484250 kubelet[2204]: E1002 19:04:32.484213 2204 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4" Oct 2 19:04:32.484436 kubelet[2204]: E1002 19:04:32.484359 2204 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:04:32.484436 kubelet[2204]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt 
"${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:04:32.484436 kubelet[2204]: rm /hostbin/cilium-mount Oct 2 19:04:32.484436 kubelet[2204]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ckvxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-t6jsn_kube-system(70ef4292-8675-4667-88cc-b5f4d4047034): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:04:32.484728 kubelet[2204]: E1002 19:04:32.484419 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-t6jsn" podUID=70ef4292-8675-4667-88cc-b5f4d4047034 Oct 2 19:04:32.498330 kubelet[2204]: I1002 19:04:32.498275 2204 scope.go:115] "RemoveContainer" containerID="9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7" Oct 2 19:04:32.498809 kubelet[2204]: I1002 19:04:32.498778 2204 scope.go:115] "RemoveContainer" containerID="9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7" Oct 2 19:04:32.501083 env[1745]: time="2023-10-02T19:04:32.501019690Z" level=info msg="RemoveContainer for \"9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7\"" Oct 2 19:04:32.501846 env[1745]: time="2023-10-02T19:04:32.501761037Z" level=info msg="RemoveContainer for \"9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7\"" Oct 2 19:04:32.502278 env[1745]: time="2023-10-02T19:04:32.502202720Z" level=error msg="RemoveContainer for \"9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7\" failed" error="failed to set removing state for container \"9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7\": container is already in removing state" Oct 2 19:04:32.502948 kubelet[2204]: E1002 19:04:32.502847 2204 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container 
\"9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7\": container is already in removing state" containerID="9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7" Oct 2 19:04:32.502948 kubelet[2204]: E1002 19:04:32.502929 2204 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7": container is already in removing state; Skipping pod "cilium-t6jsn_kube-system(70ef4292-8675-4667-88cc-b5f4d4047034)" Oct 2 19:04:32.503397 kubelet[2204]: E1002 19:04:32.503366 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-t6jsn_kube-system(70ef4292-8675-4667-88cc-b5f4d4047034)\"" pod="kube-system/cilium-t6jsn" podUID=70ef4292-8675-4667-88cc-b5f4d4047034 Oct 2 19:04:32.507610 env[1745]: time="2023-10-02T19:04:32.507549167Z" level=info msg="RemoveContainer for \"9665797c51c8e64675c88714fb46dd1be338819968c5b32d52ad3523481baed7\" returns successfully" Oct 2 19:04:32.960805 kubelet[2204]: E1002 19:04:32.960759 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:33.961595 kubelet[2204]: E1002 19:04:33.961521 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:34.962574 kubelet[2204]: E1002 19:04:34.962505 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:35.561244 kubelet[2204]: W1002 19:04:35.561171 2204 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70ef4292_8675_4667_88cc_b5f4d4047034.slice/cri-containerd-dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4.scope WatchSource:0}: task dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4 not found: not found Oct 2 19:04:35.962854 kubelet[2204]: E1002 19:04:35.962677 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:36.963795 kubelet[2204]: E1002 19:04:36.963716 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:37.052258 kubelet[2204]: E1002 19:04:37.052177 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:04:37.964776 kubelet[2204]: E1002 19:04:37.964703 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:38.965967 kubelet[2204]: E1002 19:04:38.965885 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:39.967032 kubelet[2204]: E1002 19:04:39.966947 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:40.968010 kubelet[2204]: E1002 19:04:40.967929 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:41.968301 kubelet[2204]: E1002 19:04:41.968255 2204 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:42.054222 kubelet[2204]: E1002 19:04:42.054164 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:04:42.969478 kubelet[2204]: E1002 19:04:42.969403 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:43.970046 kubelet[2204]: E1002 19:04:43.969976 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:44.970864 kubelet[2204]: E1002 19:04:44.970773 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:45.972075 kubelet[2204]: E1002 19:04:45.972005 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:46.972480 kubelet[2204]: E1002 19:04:46.972413 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:47.056270 kubelet[2204]: E1002 19:04:47.056203 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:04:47.973574 kubelet[2204]: E1002 19:04:47.973503 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:48.316198 kubelet[2204]: E1002 19:04:48.316136 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-t6jsn_kube-system(70ef4292-8675-4667-88cc-b5f4d4047034)\"" pod="kube-system/cilium-t6jsn" podUID=70ef4292-8675-4667-88cc-b5f4d4047034 Oct 2 19:04:48.974559 kubelet[2204]: E1002 19:04:48.974510 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:49.975656 kubelet[2204]: E1002 19:04:49.975584 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:50.976784 kubelet[2204]: E1002 19:04:50.976718 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:51.894986 kubelet[2204]: E1002 19:04:51.894948 2204 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:51.977574 kubelet[2204]: E1002 19:04:51.977513 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:52.057428 kubelet[2204]: E1002 19:04:52.057375 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:04:52.978453 kubelet[2204]: E1002 19:04:52.978390 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:53.978579 kubelet[2204]: E1002 19:04:53.978519 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:04:54.979137 kubelet[2204]: E1002 19:04:54.979090 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:55.980164 kubelet[2204]: E1002 19:04:55.980101 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:56.980519 kubelet[2204]: E1002 19:04:56.980451 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:57.058686 kubelet[2204]: E1002 19:04:57.058636 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:04:57.981109 kubelet[2204]: E1002 19:04:57.981044 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:58.981700 kubelet[2204]: E1002 19:04:58.981624 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:04:59.316546 kubelet[2204]: E1002 19:04:59.316284 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-t6jsn_kube-system(70ef4292-8675-4667-88cc-b5f4d4047034)\"" pod="kube-system/cilium-t6jsn" podUID=70ef4292-8675-4667-88cc-b5f4d4047034 Oct 2 19:04:59.982330 kubelet[2204]: E1002 19:04:59.982263 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:00.983140 kubelet[2204]: E1002 19:05:00.983095 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:01.984056 kubelet[2204]: E1002 19:05:01.983983 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:02.060345 kubelet[2204]: E1002 19:05:02.060284 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:05:02.984973 kubelet[2204]: E1002 19:05:02.984928 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:03.985815 kubelet[2204]: E1002 19:05:03.985777 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:04.987065 kubelet[2204]: E1002 19:05:04.987000 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:05.987824 kubelet[2204]: E1002 19:05:05.987750 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:06.988348 kubelet[2204]: E1002 19:05:06.988284 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:07.061717 kubelet[2204]: E1002 19:05:07.061674 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:05:07.988648 kubelet[2204]: E1002 
19:05:07.988581 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:08.989653 kubelet[2204]: E1002 19:05:08.989614 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:09.991204 kubelet[2204]: E1002 19:05:09.991160 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:10.315961 kubelet[2204]: E1002 19:05:10.315677 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-t6jsn_kube-system(70ef4292-8675-4667-88cc-b5f4d4047034)\"" pod="kube-system/cilium-t6jsn" podUID=70ef4292-8675-4667-88cc-b5f4d4047034 Oct 2 19:05:10.992679 kubelet[2204]: E1002 19:05:10.992618 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:11.894517 kubelet[2204]: E1002 19:05:11.894451 2204 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:11.992832 kubelet[2204]: E1002 19:05:11.992756 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:12.062489 kubelet[2204]: E1002 19:05:12.062431 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:05:12.993448 kubelet[2204]: E1002 19:05:12.993402 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:13.995251 kubelet[2204]: E1002 19:05:13.995182 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:14.996179 kubelet[2204]: E1002 19:05:14.996111 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:15.997273 kubelet[2204]: E1002 19:05:15.997230 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:16.998845 kubelet[2204]: E1002 19:05:16.998780 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:17.063809 kubelet[2204]: E1002 19:05:17.063754 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:05:17.999350 kubelet[2204]: E1002 19:05:17.999282 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:18.999885 kubelet[2204]: E1002 19:05:18.999844 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:20.000877 kubelet[2204]: E1002 19:05:20.000804 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:21.001845 kubelet[2204]: E1002 19:05:21.001783 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:05:21.319349 env[1745]: time="2023-10-02T19:05:21.319030451Z" level=info msg="CreateContainer within sandbox \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:05:21.336582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2718438218.mount: Deactivated successfully. Oct 2 19:05:21.348313 env[1745]: time="2023-10-02T19:05:21.348228582Z" level=info msg="CreateContainer within sandbox \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"359dde3aa807f87300a85d930cc7d446558bd0c0013bee6428fbae676bd4c828\"" Oct 2 19:05:21.349582 env[1745]: time="2023-10-02T19:05:21.349532019Z" level=info msg="StartContainer for \"359dde3aa807f87300a85d930cc7d446558bd0c0013bee6428fbae676bd4c828\"" Oct 2 19:05:21.400887 systemd[1]: Started cri-containerd-359dde3aa807f87300a85d930cc7d446558bd0c0013bee6428fbae676bd4c828.scope. Oct 2 19:05:21.447121 systemd[1]: cri-containerd-359dde3aa807f87300a85d930cc7d446558bd0c0013bee6428fbae676bd4c828.scope: Deactivated successfully. Oct 2 19:05:21.468036 env[1745]: time="2023-10-02T19:05:21.467958163Z" level=info msg="shim disconnected" id=359dde3aa807f87300a85d930cc7d446558bd0c0013bee6428fbae676bd4c828 Oct 2 19:05:21.468320 env[1745]: time="2023-10-02T19:05:21.468042884Z" level=warning msg="cleaning up after shim disconnected" id=359dde3aa807f87300a85d930cc7d446558bd0c0013bee6428fbae676bd4c828 namespace=k8s.io Oct 2 19:05:21.468320 env[1745]: time="2023-10-02T19:05:21.468065169Z" level=info msg="cleaning up dead shim" Oct 2 19:05:21.495724 env[1745]: time="2023-10-02T19:05:21.495637573Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:05:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2894 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:05:21Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/359dde3aa807f87300a85d930cc7d446558bd0c0013bee6428fbae676bd4c828/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:05:21.496222 env[1745]: time="2023-10-02T19:05:21.496130673Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:05:21.498043 env[1745]: time="2023-10-02T19:05:21.497985711Z" level=error msg="Failed to pipe stdout of container \"359dde3aa807f87300a85d930cc7d446558bd0c0013bee6428fbae676bd4c828\"" error="reading from a closed fifo" Oct 2 19:05:21.498221 env[1745]: time="2023-10-02T19:05:21.498161166Z" level=error msg="Failed to pipe stderr of container \"359dde3aa807f87300a85d930cc7d446558bd0c0013bee6428fbae676bd4c828\"" error="reading from a closed fifo" Oct 2 19:05:21.500565 env[1745]: time="2023-10-02T19:05:21.500486180Z" level=error msg="StartContainer for \"359dde3aa807f87300a85d930cc7d446558bd0c0013bee6428fbae676bd4c828\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:05:21.501029 kubelet[2204]: E1002 19:05:21.500993 2204 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" containerID="359dde3aa807f87300a85d930cc7d446558bd0c0013bee6428fbae676bd4c828" Oct 2 19:05:21.501188 kubelet[2204]: E1002 19:05:21.501139 2204 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:05:21.501188 kubelet[2204]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:05:21.501188 kubelet[2204]: rm /hostbin/cilium-mount Oct 2 19:05:21.501188 kubelet[2204]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ckvxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-t6jsn_kube-system(70ef4292-8675-4667-88cc-b5f4d4047034): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:05:21.501508 kubelet[2204]: E1002 19:05:21.501202 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-t6jsn" podUID=70ef4292-8675-4667-88cc-b5f4d4047034 Oct 2 19:05:21.611307 kubelet[2204]: I1002 19:05:21.609556 2204 scope.go:115] "RemoveContainer" containerID="dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4" Oct 2 19:05:21.611307 kubelet[2204]: I1002 19:05:21.609976 2204 scope.go:115] "RemoveContainer" containerID="dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4" Oct 2 19:05:21.614556 env[1745]: time="2023-10-02T19:05:21.613398858Z" level=info msg="RemoveContainer for \"dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4\"" Oct 2 19:05:21.616665 env[1745]: time="2023-10-02T19:05:21.616577770Z" level=info msg="RemoveContainer for \"dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4\"" Oct 2 19:05:21.616965 env[1745]: 
time="2023-10-02T19:05:21.616844078Z" level=error msg="RemoveContainer for \"dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4\" failed" error="rpc error: code = NotFound desc = get container info: container \"dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4\" in namespace \"k8s.io\": not found" Oct 2 19:05:21.617273 kubelet[2204]: E1002 19:05:21.617219 2204 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = NotFound desc = get container info: container \"dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4\" in namespace \"k8s.io\": not found" containerID="dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4" Oct 2 19:05:21.617410 kubelet[2204]: E1002 19:05:21.617297 2204 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = NotFound desc = get container info: container "dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4" in namespace "k8s.io": not found; Skipping pod "cilium-t6jsn_kube-system(70ef4292-8675-4667-88cc-b5f4d4047034)" Oct 2 19:05:21.617779 kubelet[2204]: E1002 19:05:21.617727 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-t6jsn_kube-system(70ef4292-8675-4667-88cc-b5f4d4047034)\"" pod="kube-system/cilium-t6jsn" podUID=70ef4292-8675-4667-88cc-b5f4d4047034 Oct 2 19:05:21.618896 env[1745]: time="2023-10-02T19:05:21.618843203Z" level=info msg="RemoveContainer for \"dc0f5e8b6632cf90af74f0211cf833ac602e620828cc622ef7f423afeb36c6b4\" returns successfully" Oct 2 19:05:22.002712 kubelet[2204]: E1002 19:05:22.002679 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:22.065359 kubelet[2204]: E1002 19:05:22.065298 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:05:22.331946 systemd[1]: run-containerd-runc-k8s.io-359dde3aa807f87300a85d930cc7d446558bd0c0013bee6428fbae676bd4c828-runc.7jCOSl.mount: Deactivated successfully. Oct 2 19:05:22.332118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-359dde3aa807f87300a85d930cc7d446558bd0c0013bee6428fbae676bd4c828-rootfs.mount: Deactivated successfully. 
Oct 2 19:05:23.003894 kubelet[2204]: E1002 19:05:23.003828 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:24.004479 kubelet[2204]: E1002 19:05:24.004417 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:24.573516 kubelet[2204]: W1002 19:05:24.573422 2204 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70ef4292_8675_4667_88cc_b5f4d4047034.slice/cri-containerd-359dde3aa807f87300a85d930cc7d446558bd0c0013bee6428fbae676bd4c828.scope WatchSource:0}: task 359dde3aa807f87300a85d930cc7d446558bd0c0013bee6428fbae676bd4c828 not found: not found Oct 2 19:05:25.005592 kubelet[2204]: E1002 19:05:25.005536 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:26.006449 kubelet[2204]: E1002 19:05:26.006377 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:27.007149 kubelet[2204]: E1002 19:05:27.007080 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:27.066451 kubelet[2204]: E1002 19:05:27.066384 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:05:28.007518 kubelet[2204]: E1002 19:05:28.007479 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:29.008498 kubelet[2204]: E1002 19:05:29.008434 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:30.008862 kubelet[2204]: E1002 19:05:30.008793 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:31.009548 kubelet[2204]: E1002 19:05:31.009505 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:31.894759 kubelet[2204]: E1002 19:05:31.894720 2204 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:32.010352 kubelet[2204]: E1002 19:05:32.010294 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:32.067895 kubelet[2204]: E1002 19:05:32.067862 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:05:33.011532 kubelet[2204]: E1002 19:05:33.011459 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:34.011703 kubelet[2204]: E1002 19:05:34.011664 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:35.012571 kubelet[2204]: E1002 19:05:35.012500 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:36.013529 kubelet[2204]: E1002 19:05:36.013462 2204 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:36.316688 kubelet[2204]: E1002 19:05:36.316295 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-t6jsn_kube-system(70ef4292-8675-4667-88cc-b5f4d4047034)\"" pod="kube-system/cilium-t6jsn" podUID=70ef4292-8675-4667-88cc-b5f4d4047034 Oct 2 19:05:37.013662 kubelet[2204]: E1002 19:05:37.013619 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:37.069620 kubelet[2204]: E1002 19:05:37.069588 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:05:38.015427 kubelet[2204]: E1002 19:05:38.015385 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:39.016427 kubelet[2204]: E1002 19:05:39.016355 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:40.017322 kubelet[2204]: E1002 19:05:40.017262 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:41.017695 kubelet[2204]: E1002 19:05:41.017654 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:42.019050 kubelet[2204]: E1002 19:05:42.019009 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:42.070361 kubelet[2204]: E1002 19:05:42.070321 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:05:43.020563 kubelet[2204]: E1002 19:05:43.020523 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:44.022252 kubelet[2204]: E1002 19:05:44.022208 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:45.023284 kubelet[2204]: E1002 19:05:45.023230 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:46.024017 kubelet[2204]: E1002 19:05:46.023954 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:47.024549 kubelet[2204]: E1002 19:05:47.024507 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:47.071840 kubelet[2204]: E1002 19:05:47.071786 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:05:48.025709 kubelet[2204]: E1002 19:05:48.025649 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:49.026013 kubelet[2204]: E1002 19:05:49.025969 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:05:50.027034 kubelet[2204]: E1002 19:05:50.026968 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:50.315914 kubelet[2204]: E1002 19:05:50.315510 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-t6jsn_kube-system(70ef4292-8675-4667-88cc-b5f4d4047034)\"" pod="kube-system/cilium-t6jsn" podUID=70ef4292-8675-4667-88cc-b5f4d4047034 Oct 2 19:05:51.028030 kubelet[2204]: E1002 19:05:51.027967 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:51.893994 kubelet[2204]: E1002 19:05:51.893953 2204 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:52.028614 kubelet[2204]: E1002 19:05:52.028552 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:52.073035 kubelet[2204]: E1002 19:05:52.072981 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:05:53.029328 kubelet[2204]: E1002 19:05:53.029256 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:54.030014 kubelet[2204]: E1002 19:05:54.029972 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:55.030973 kubelet[2204]: E1002 19:05:55.030929 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:56.032246 kubelet[2204]: E1002 19:05:56.032188 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:57.032405 kubelet[2204]: E1002 19:05:57.032335 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:57.074300 kubelet[2204]: E1002 19:05:57.074225 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:05:58.032730 kubelet[2204]: E1002 19:05:58.032670 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:05:59.033046 kubelet[2204]: E1002 19:05:59.032981 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:00.033681 kubelet[2204]: E1002 19:06:00.033629 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:01.035347 kubelet[2204]: E1002 19:06:01.035282 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:02.036465 kubelet[2204]: E1002 19:06:02.036422 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:02.075554 kubelet[2204]: E1002 19:06:02.075503 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:06:03.037348 kubelet[2204]: E1002 19:06:03.037283 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:04.037565 kubelet[2204]: E1002 19:06:04.037504 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:04.316405 kubelet[2204]: E1002 19:06:04.316061 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-t6jsn_kube-system(70ef4292-8675-4667-88cc-b5f4d4047034)\"" pod="kube-system/cilium-t6jsn" podUID=70ef4292-8675-4667-88cc-b5f4d4047034 Oct 2 19:06:05.038566 kubelet[2204]: E1002 19:06:05.038504 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:06.039261 kubelet[2204]: E1002 19:06:06.039190 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:07.040352 kubelet[2204]: E1002 19:06:07.040288 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:07.076630 kubelet[2204]: E1002 19:06:07.076597 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:06:08.041046 kubelet[2204]: E1002 19:06:08.040984 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:09.041823 kubelet[2204]: E1002 19:06:09.041751 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:10.042723 kubelet[2204]: E1002 19:06:10.042661 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:11.043285 kubelet[2204]: E1002 19:06:11.043223 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:11.894267 kubelet[2204]: E1002 19:06:11.894230 2204 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:12.043538 kubelet[2204]: E1002 19:06:12.043496 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:12.078397 kubelet[2204]: E1002 19:06:12.078345 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:06:13.044319 kubelet[2204]: E1002 19:06:13.044282 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:14.045936 kubelet[2204]: E1002 19:06:14.045880 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:15.046879 kubelet[2204]: E1002 19:06:15.046838 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:16.048160 kubelet[2204]: E1002 
19:06:16.048116 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:17.049704 kubelet[2204]: E1002 19:06:17.049641 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:17.079604 kubelet[2204]: E1002 19:06:17.079555 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:06:17.316060 kubelet[2204]: E1002 19:06:17.315889 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-t6jsn_kube-system(70ef4292-8675-4667-88cc-b5f4d4047034)\"" pod="kube-system/cilium-t6jsn" podUID=70ef4292-8675-4667-88cc-b5f4d4047034 Oct 2 19:06:18.050301 kubelet[2204]: E1002 19:06:18.050232 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:19.050844 kubelet[2204]: E1002 19:06:19.050778 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:20.051775 kubelet[2204]: E1002 19:06:20.051733 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:21.052849 kubelet[2204]: E1002 19:06:21.052806 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:22.053771 kubelet[2204]: E1002 19:06:22.053719 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:22.081451 kubelet[2204]: E1002 19:06:22.081398 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:06:23.055329 kubelet[2204]: E1002 19:06:23.055258 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:24.055720 kubelet[2204]: E1002 19:06:24.055658 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:25.056351 kubelet[2204]: E1002 19:06:25.056305 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:26.057830 kubelet[2204]: E1002 19:06:26.057789 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:27.059170 kubelet[2204]: E1002 19:06:27.059125 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:27.082348 kubelet[2204]: E1002 19:06:27.082318 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:06:28.060197 kubelet[2204]: E1002 19:06:28.060155 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:29.061498 kubelet[2204]: E1002 19:06:29.061430 2204 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:30.062254 kubelet[2204]: E1002 19:06:30.062211 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:31.063265 kubelet[2204]: E1002 19:06:31.063199 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:31.316509 kubelet[2204]: E1002 19:06:31.316364 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-t6jsn_kube-system(70ef4292-8675-4667-88cc-b5f4d4047034)\"" pod="kube-system/cilium-t6jsn" podUID=70ef4292-8675-4667-88cc-b5f4d4047034 Oct 2 19:06:31.894296 kubelet[2204]: E1002 19:06:31.894230 2204 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:32.063419 kubelet[2204]: E1002 19:06:32.063369 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:32.083885 kubelet[2204]: E1002 19:06:32.083834 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:06:33.064846 kubelet[2204]: E1002 19:06:33.064796 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:34.065858 kubelet[2204]: E1002 19:06:34.065817 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:35.067314 kubelet[2204]: E1002 19:06:35.067231 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:36.068028 kubelet[2204]: E1002 19:06:36.067955 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:37.068531 kubelet[2204]: E1002 19:06:37.068479 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:37.085280 kubelet[2204]: E1002 19:06:37.085246 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:06:38.070278 kubelet[2204]: E1002 19:06:38.070212 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:39.071259 kubelet[2204]: E1002 19:06:39.071222 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:40.072686 kubelet[2204]: E1002 19:06:40.072641 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:40.297844 env[1745]: time="2023-10-02T19:06:40.297789407Z" level=info msg="StopPodSandbox for \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\"" Oct 2 19:06:40.298546 env[1745]: time="2023-10-02T19:06:40.298501783Z" level=info msg="Container to stop \"359dde3aa807f87300a85d930cc7d446558bd0c0013bee6428fbae676bd4c828\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:06:40.300950 
systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef-shm.mount: Deactivated successfully. Oct 2 19:06:40.323000 audit: BPF prog-id=80 op=UNLOAD Oct 2 19:06:40.334957 kernel: kauditd_printk_skb: 285 callbacks suppressed Oct 2 19:06:40.335143 kernel: audit: type=1334 audit(1696273600.323:733): prog-id=80 op=UNLOAD Oct 2 19:06:40.335228 kernel: audit: type=1334 audit(1696273600.331:734): prog-id=83 op=UNLOAD Oct 2 19:06:40.331000 audit: BPF prog-id=83 op=UNLOAD Oct 2 19:06:40.323953 systemd[1]: cri-containerd-7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef.scope: Deactivated successfully. Oct 2 19:06:40.383346 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef-rootfs.mount: Deactivated successfully. Oct 2 19:06:40.396697 env[1745]: time="2023-10-02T19:06:40.396634871Z" level=info msg="shim disconnected" id=7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef Oct 2 19:06:40.397098 env[1745]: time="2023-10-02T19:06:40.397061394Z" level=warning msg="cleaning up after shim disconnected" id=7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef namespace=k8s.io Oct 2 19:06:40.397232 env[1745]: time="2023-10-02T19:06:40.397202004Z" level=info msg="cleaning up dead shim" Oct 2 19:06:40.423560 env[1745]: time="2023-10-02T19:06:40.423501117Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:06:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2936 runtime=io.containerd.runc.v2\n" Oct 2 19:06:40.424354 env[1745]: time="2023-10-02T19:06:40.424309041Z" level=info msg="TearDown network for sandbox \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\" successfully" Oct 2 19:06:40.424521 env[1745]: time="2023-10-02T19:06:40.424487381Z" level=info msg="StopPodSandbox for \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\" returns successfully" Oct 2 19:06:40.468633 kubelet[2204]: I1002 19:06:40.468568 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-xtables-lock\") pod \"70ef4292-8675-4667-88cc-b5f4d4047034\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " Oct 2 19:06:40.468847 kubelet[2204]: I1002 19:06:40.468650 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckvxz\" (UniqueName: \"kubernetes.io/projected/70ef4292-8675-4667-88cc-b5f4d4047034-kube-api-access-ckvxz\") pod \"70ef4292-8675-4667-88cc-b5f4d4047034\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " Oct 2 19:06:40.468847 kubelet[2204]: I1002 19:06:40.468695 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-lib-modules\") pod \"70ef4292-8675-4667-88cc-b5f4d4047034\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " Oct 2 19:06:40.468847 kubelet[2204]: I1002 19:06:40.468738 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70ef4292-8675-4667-88cc-b5f4d4047034-hubble-tls\") pod \"70ef4292-8675-4667-88cc-b5f4d4047034\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " Oct 2 19:06:40.468847 kubelet[2204]: I1002 19:06:40.468786 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/70ef4292-8675-4667-88cc-b5f4d4047034-cilium-config-path\") pod \"70ef4292-8675-4667-88cc-b5f4d4047034\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " Oct 2 19:06:40.468847 kubelet[2204]: I1002 19:06:40.468827 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-host-proc-sys-net\") pod \"70ef4292-8675-4667-88cc-b5f4d4047034\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " Oct 2 19:06:40.469201 kubelet[2204]: I1002 19:06:40.468865 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-cni-path\") pod \"70ef4292-8675-4667-88cc-b5f4d4047034\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " Oct 2 19:06:40.469201 kubelet[2204]: I1002 19:06:40.468952 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-cilium-run\") pod \"70ef4292-8675-4667-88cc-b5f4d4047034\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " Oct 2 19:06:40.469201 kubelet[2204]: I1002 19:06:40.468996 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-bpf-maps\") pod \"70ef4292-8675-4667-88cc-b5f4d4047034\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " Oct 2 19:06:40.469201 kubelet[2204]: I1002 19:06:40.469057 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70ef4292-8675-4667-88cc-b5f4d4047034-clustermesh-secrets\") pod \"70ef4292-8675-4667-88cc-b5f4d4047034\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " Oct 2 19:06:40.469201 kubelet[2204]: I1002 19:06:40.469105 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-hostproc\") pod \"70ef4292-8675-4667-88cc-b5f4d4047034\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " Oct 2 19:06:40.469201 kubelet[2204]: I1002 19:06:40.469145 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-cilium-cgroup\") pod \"70ef4292-8675-4667-88cc-b5f4d4047034\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " Oct 2 19:06:40.469555 kubelet[2204]: I1002 19:06:40.469182 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-etc-cni-netd\") pod \"70ef4292-8675-4667-88cc-b5f4d4047034\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " Oct 2 19:06:40.469555 kubelet[2204]: I1002 19:06:40.469228 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-host-proc-sys-kernel\") pod \"70ef4292-8675-4667-88cc-b5f4d4047034\" (UID: \"70ef4292-8675-4667-88cc-b5f4d4047034\") " Oct 2 19:06:40.469555 kubelet[2204]: I1002 19:06:40.469302 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "70ef4292-8675-4667-88cc-b5f4d4047034" 
(UID: "70ef4292-8675-4667-88cc-b5f4d4047034"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:06:40.469555 kubelet[2204]: I1002 19:06:40.469359 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "70ef4292-8675-4667-88cc-b5f4d4047034" (UID: "70ef4292-8675-4667-88cc-b5f4d4047034"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:06:40.471923 kubelet[2204]: I1002 19:06:40.469843 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "70ef4292-8675-4667-88cc-b5f4d4047034" (UID: "70ef4292-8675-4667-88cc-b5f4d4047034"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:06:40.471923 kubelet[2204]: I1002 19:06:40.469948 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "70ef4292-8675-4667-88cc-b5f4d4047034" (UID: "70ef4292-8675-4667-88cc-b5f4d4047034"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:06:40.471923 kubelet[2204]: I1002 19:06:40.470230 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "70ef4292-8675-4667-88cc-b5f4d4047034" (UID: "70ef4292-8675-4667-88cc-b5f4d4047034"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:06:40.471923 kubelet[2204]: I1002 19:06:40.470579 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-hostproc" (OuterVolumeSpecName: "hostproc") pod "70ef4292-8675-4667-88cc-b5f4d4047034" (UID: "70ef4292-8675-4667-88cc-b5f4d4047034"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:06:40.471923 kubelet[2204]: I1002 19:06:40.470633 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "70ef4292-8675-4667-88cc-b5f4d4047034" (UID: "70ef4292-8675-4667-88cc-b5f4d4047034"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:06:40.472330 kubelet[2204]: I1002 19:06:40.470679 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "70ef4292-8675-4667-88cc-b5f4d4047034" (UID: "70ef4292-8675-4667-88cc-b5f4d4047034"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:06:40.472330 kubelet[2204]: I1002 19:06:40.470726 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "70ef4292-8675-4667-88cc-b5f4d4047034" (UID: "70ef4292-8675-4667-88cc-b5f4d4047034"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:06:40.472330 kubelet[2204]: W1002 19:06:40.470985 2204 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/70ef4292-8675-4667-88cc-b5f4d4047034/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:06:40.472330 kubelet[2204]: I1002 19:06:40.472303 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-cni-path" (OuterVolumeSpecName: "cni-path") pod "70ef4292-8675-4667-88cc-b5f4d4047034" (UID: "70ef4292-8675-4667-88cc-b5f4d4047034"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:06:40.476585 kubelet[2204]: I1002 19:06:40.476531 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70ef4292-8675-4667-88cc-b5f4d4047034-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "70ef4292-8675-4667-88cc-b5f4d4047034" (UID: "70ef4292-8675-4667-88cc-b5f4d4047034"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:06:40.485989 systemd[1]: var-lib-kubelet-pods-70ef4292\x2d8675\x2d4667\x2d88cc\x2db5f4d4047034-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dckvxz.mount: Deactivated successfully. Oct 2 19:06:40.488949 systemd[1]: var-lib-kubelet-pods-70ef4292\x2d8675\x2d4667\x2d88cc\x2db5f4d4047034-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:06:40.493736 kubelet[2204]: I1002 19:06:40.493669 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70ef4292-8675-4667-88cc-b5f4d4047034-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "70ef4292-8675-4667-88cc-b5f4d4047034" (UID: "70ef4292-8675-4667-88cc-b5f4d4047034"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:06:40.494148 kubelet[2204]: I1002 19:06:40.494100 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70ef4292-8675-4667-88cc-b5f4d4047034-kube-api-access-ckvxz" (OuterVolumeSpecName: "kube-api-access-ckvxz") pod "70ef4292-8675-4667-88cc-b5f4d4047034" (UID: "70ef4292-8675-4667-88cc-b5f4d4047034"). InnerVolumeSpecName "kube-api-access-ckvxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:06:40.497085 systemd[1]: var-lib-kubelet-pods-70ef4292\x2d8675\x2d4667\x2d88cc\x2db5f4d4047034-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:06:40.499981 kubelet[2204]: I1002 19:06:40.499934 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70ef4292-8675-4667-88cc-b5f4d4047034-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "70ef4292-8675-4667-88cc-b5f4d4047034" (UID: "70ef4292-8675-4667-88cc-b5f4d4047034"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:06:40.570136 kubelet[2204]: I1002 19:06:40.570101 2204 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-bpf-maps\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:40.570321 kubelet[2204]: I1002 19:06:40.570299 2204 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70ef4292-8675-4667-88cc-b5f4d4047034-clustermesh-secrets\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:40.570469 kubelet[2204]: I1002 19:06:40.570448 2204 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70ef4292-8675-4667-88cc-b5f4d4047034-cilium-config-path\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:40.570596 kubelet[2204]: I1002 19:06:40.570576 2204 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-host-proc-sys-net\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:40.570728 kubelet[2204]: I1002 19:06:40.570708 2204 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-cni-path\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:40.570845 kubelet[2204]: I1002 19:06:40.570825 2204 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-cilium-run\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:40.571001 kubelet[2204]: I1002 19:06:40.570981 2204 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-host-proc-sys-kernel\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:40.571129 kubelet[2204]: I1002 19:06:40.571110 2204 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-hostproc\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:40.571251 kubelet[2204]: I1002 19:06:40.571232 2204 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-cilium-cgroup\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:40.571374 kubelet[2204]: I1002 19:06:40.571354 2204 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-etc-cni-netd\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:40.571519 kubelet[2204]: I1002 19:06:40.571499 2204 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-lib-modules\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:40.571645 kubelet[2204]: I1002 19:06:40.571625 2204 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70ef4292-8675-4667-88cc-b5f4d4047034-hubble-tls\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:40.571761 kubelet[2204]: I1002 19:06:40.571742 2204 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70ef4292-8675-4667-88cc-b5f4d4047034-xtables-lock\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:40.571885 kubelet[2204]: I1002 19:06:40.571866 2204 
reconciler.go:399] "Volume detached for volume \"kube-api-access-ckvxz\" (UniqueName: \"kubernetes.io/projected/70ef4292-8675-4667-88cc-b5f4d4047034-kube-api-access-ckvxz\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:40.764628 kubelet[2204]: I1002 19:06:40.764595 2204 scope.go:115] "RemoveContainer" containerID="359dde3aa807f87300a85d930cc7d446558bd0c0013bee6428fbae676bd4c828" Oct 2 19:06:40.768805 env[1745]: time="2023-10-02T19:06:40.768173259Z" level=info msg="RemoveContainer for \"359dde3aa807f87300a85d930cc7d446558bd0c0013bee6428fbae676bd4c828\"" Oct 2 19:06:40.774138 env[1745]: time="2023-10-02T19:06:40.774052492Z" level=info msg="RemoveContainer for \"359dde3aa807f87300a85d930cc7d446558bd0c0013bee6428fbae676bd4c828\" returns successfully" Oct 2 19:06:40.782559 systemd[1]: Removed slice kubepods-burstable-pod70ef4292_8675_4667_88cc_b5f4d4047034.slice. Oct 2 19:06:40.815743 kubelet[2204]: I1002 19:06:40.815695 2204 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:06:40.815979 kubelet[2204]: E1002 19:06:40.815769 2204 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="70ef4292-8675-4667-88cc-b5f4d4047034" containerName="mount-cgroup" Oct 2 19:06:40.815979 kubelet[2204]: E1002 19:06:40.815794 2204 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="70ef4292-8675-4667-88cc-b5f4d4047034" containerName="mount-cgroup" Oct 2 19:06:40.815979 kubelet[2204]: E1002 19:06:40.815811 2204 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="70ef4292-8675-4667-88cc-b5f4d4047034" containerName="mount-cgroup" Oct 2 19:06:40.815979 kubelet[2204]: I1002 19:06:40.815846 2204 memory_manager.go:345] "RemoveStaleState removing state" podUID="70ef4292-8675-4667-88cc-b5f4d4047034" containerName="mount-cgroup" Oct 2 19:06:40.815979 kubelet[2204]: I1002 19:06:40.815864 2204 memory_manager.go:345] "RemoveStaleState removing state" podUID="70ef4292-8675-4667-88cc-b5f4d4047034" containerName="mount-cgroup" Oct 2 19:06:40.815979 kubelet[2204]: I1002 19:06:40.815882 2204 memory_manager.go:345] "RemoveStaleState removing state" podUID="70ef4292-8675-4667-88cc-b5f4d4047034" containerName="mount-cgroup" Oct 2 19:06:40.815979 kubelet[2204]: E1002 19:06:40.815961 2204 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="70ef4292-8675-4667-88cc-b5f4d4047034" containerName="mount-cgroup" Oct 2 19:06:40.815979 kubelet[2204]: E1002 19:06:40.815982 2204 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="70ef4292-8675-4667-88cc-b5f4d4047034" containerName="mount-cgroup" Oct 2 19:06:40.816479 kubelet[2204]: I1002 19:06:40.816014 2204 memory_manager.go:345] "RemoveStaleState removing state" podUID="70ef4292-8675-4667-88cc-b5f4d4047034" containerName="mount-cgroup" Oct 2 19:06:40.816479 kubelet[2204]: I1002 19:06:40.816031 2204 memory_manager.go:345] "RemoveStaleState removing state" podUID="70ef4292-8675-4667-88cc-b5f4d4047034" containerName="mount-cgroup" Oct 2 19:06:40.825696 systemd[1]: Created slice kubepods-burstable-pod3874ac1c_007c_40d9_9459_1c7ff05a2f15.slice. 
Oct 2 19:06:40.879538 kubelet[2204]: I1002 19:06:40.879460 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-hostproc\") pod \"cilium-4xk2b\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " pod="kube-system/cilium-4xk2b" Oct 2 19:06:40.879725 kubelet[2204]: I1002 19:06:40.879559 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-lib-modules\") pod \"cilium-4xk2b\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " pod="kube-system/cilium-4xk2b" Oct 2 19:06:40.879725 kubelet[2204]: I1002 19:06:40.879637 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-host-proc-sys-net\") pod \"cilium-4xk2b\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " pod="kube-system/cilium-4xk2b" Oct 2 19:06:40.879725 kubelet[2204]: I1002 19:06:40.879715 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-host-proc-sys-kernel\") pod \"cilium-4xk2b\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " pod="kube-system/cilium-4xk2b" Oct 2 19:06:40.879981 kubelet[2204]: I1002 19:06:40.879787 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csmzt\" (UniqueName: \"kubernetes.io/projected/3874ac1c-007c-40d9-9459-1c7ff05a2f15-kube-api-access-csmzt\") pod \"cilium-4xk2b\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " pod="kube-system/cilium-4xk2b" Oct 2 19:06:40.879981 kubelet[2204]: I1002 19:06:40.879965 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3874ac1c-007c-40d9-9459-1c7ff05a2f15-hubble-tls\") pod \"cilium-4xk2b\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " pod="kube-system/cilium-4xk2b" Oct 2 19:06:40.880128 kubelet[2204]: I1002 19:06:40.880070 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-cilium-cgroup\") pod \"cilium-4xk2b\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " pod="kube-system/cilium-4xk2b" Oct 2 19:06:40.880206 kubelet[2204]: I1002 19:06:40.880163 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-cni-path\") pod \"cilium-4xk2b\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " pod="kube-system/cilium-4xk2b" Oct 2 19:06:40.880334 kubelet[2204]: I1002 19:06:40.880294 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3874ac1c-007c-40d9-9459-1c7ff05a2f15-clustermesh-secrets\") pod \"cilium-4xk2b\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " pod="kube-system/cilium-4xk2b" Oct 2 19:06:40.880440 kubelet[2204]: I1002 19:06:40.880410 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-cilium-run\") pod \"cilium-4xk2b\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " pod="kube-system/cilium-4xk2b" Oct 2 19:06:40.880575 kubelet[2204]: I1002 19:06:40.880512 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-bpf-maps\") pod \"cilium-4xk2b\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " pod="kube-system/cilium-4xk2b" Oct 2 19:06:40.880668 kubelet[2204]: I1002 19:06:40.880641 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-etc-cni-netd\") pod \"cilium-4xk2b\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " pod="kube-system/cilium-4xk2b" Oct 2 19:06:40.880752 kubelet[2204]: I1002 19:06:40.880741 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-xtables-lock\") pod \"cilium-4xk2b\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " pod="kube-system/cilium-4xk2b" Oct 2 19:06:40.880882 kubelet[2204]: I1002 19:06:40.880841 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3874ac1c-007c-40d9-9459-1c7ff05a2f15-cilium-config-path\") pod \"cilium-4xk2b\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " pod="kube-system/cilium-4xk2b" Oct 2 19:06:41.073549 kubelet[2204]: E1002 19:06:41.073398 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:41.142072 env[1745]: time="2023-10-02T19:06:41.142006163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4xk2b,Uid:3874ac1c-007c-40d9-9459-1c7ff05a2f15,Namespace:kube-system,Attempt:0,}" Oct 2 19:06:41.174999 env[1745]: time="2023-10-02T19:06:41.174841504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:06:41.174999 env[1745]: time="2023-10-02T19:06:41.174945741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:06:41.175523 env[1745]: time="2023-10-02T19:06:41.174973054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:06:41.175886 env[1745]: time="2023-10-02T19:06:41.175784771Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8932e5aa35004d696c01983d45b5c9f947ae3d2955997574c83353843d1f5ad0 pid=2963 runtime=io.containerd.runc.v2 Oct 2 19:06:41.208658 systemd[1]: Started cri-containerd-8932e5aa35004d696c01983d45b5c9f947ae3d2955997574c83353843d1f5ad0.scope. 
Oct 2 19:06:41.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.261251 kernel: audit: type=1400 audit(1696273601.244:735): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.261452 kernel: audit: type=1400 audit(1696273601.244:736): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.261547 kernel: audit: type=1400 audit(1696273601.244:737): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.276635 kernel: audit: type=1400 audit(1696273601.244:738): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.276840 kernel: audit: type=1400 audit(1696273601.244:739): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.284686 kernel: audit: type=1400 audit(1696273601.244:740): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.305689 kernel: audit: type=1400 audit(1696273601.244:741): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.312986 kernel: audit: type=1400 audit(1696273601.244:742): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.245000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.245000 audit: BPF prog-id=87 op=LOAD Oct 2 19:06:41.252000 audit[2973]: AVC avc: denied { bpf } for pid=2973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.252000 audit[2973]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=2963 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:06:41.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839333265356161333530303464363936633031393833643435623563 Oct 2 19:06:41.252000 audit[2973]: AVC avc: denied { perfmon } for pid=2973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.252000 audit[2973]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2963 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:06:41.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839333265356161333530303464363936633031393833643435623563 Oct 2 19:06:41.252000 audit[2973]: AVC avc: denied { bpf } for pid=2973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.252000 audit[2973]: AVC avc: denied { bpf } for pid=2973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.252000 audit[2973]: AVC avc: denied { bpf } for pid=2973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.252000 audit[2973]: AVC avc: denied { perfmon } for pid=2973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.252000 audit[2973]: AVC avc: denied { perfmon } for pid=2973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.252000 audit[2973]: AVC avc: denied { perfmon } for pid=2973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.252000 audit[2973]: AVC avc: 
denied { perfmon } for pid=2973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.252000 audit[2973]: AVC avc: denied { perfmon } for pid=2973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.252000 audit[2973]: AVC avc: denied { bpf } for pid=2973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.252000 audit[2973]: AVC avc: denied { bpf } for pid=2973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.252000 audit: BPF prog-id=88 op=LOAD Oct 2 19:06:41.252000 audit[2973]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2963 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:06:41.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839333265356161333530303464363936633031393833643435623563 Oct 2 19:06:41.253000 audit[2973]: AVC avc: denied { bpf } for pid=2973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.253000 audit[2973]: AVC avc: denied { bpf } for pid=2973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.253000 audit[2973]: AVC avc: denied { perfmon } for pid=2973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.253000 audit[2973]: AVC avc: denied { perfmon } for pid=2973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.253000 audit[2973]: AVC avc: denied { perfmon } for pid=2973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.253000 audit[2973]: AVC avc: denied { perfmon } for pid=2973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.253000 audit[2973]: AVC avc: denied { perfmon } for pid=2973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.253000 audit[2973]: AVC avc: denied { bpf } for pid=2973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.253000 audit[2973]: AVC avc: denied { bpf } for pid=2973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.253000 audit: BPF prog-id=89 op=LOAD Oct 2 19:06:41.253000 audit[2973]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000195670 a2=78 
a3=0 items=0 ppid=2963 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:06:41.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839333265356161333530303464363936633031393833643435623563 Oct 2 19:06:41.260000 audit: BPF prog-id=89 op=UNLOAD Oct 2 19:06:41.260000 audit: BPF prog-id=88 op=UNLOAD Oct 2 19:06:41.261000 audit[2973]: AVC avc: denied { bpf } for pid=2973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.261000 audit[2973]: AVC avc: denied { bpf } for pid=2973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.261000 audit[2973]: AVC avc: denied { bpf } for pid=2973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.261000 audit[2973]: AVC avc: denied { perfmon } for pid=2973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.261000 audit[2973]: AVC avc: denied { perfmon } for pid=2973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.261000 audit[2973]: AVC avc: denied { perfmon } for pid=2973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.261000 audit[2973]: AVC avc: denied { perfmon } for pid=2973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.261000 audit[2973]: AVC avc: denied { perfmon } for pid=2973 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.261000 audit[2973]: AVC avc: denied { bpf } for pid=2973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.261000 audit[2973]: AVC avc: denied { bpf } for pid=2973 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:41.261000 audit: BPF prog-id=90 op=LOAD Oct 2 19:06:41.261000 audit[2973]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2963 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:06:41.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839333265356161333530303464363936633031393833643435623563 Oct 2 19:06:41.346042 env[1745]: time="2023-10-02T19:06:41.345974433Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-4xk2b,Uid:3874ac1c-007c-40d9-9459-1c7ff05a2f15,Namespace:kube-system,Attempt:0,} returns sandbox id \"8932e5aa35004d696c01983d45b5c9f947ae3d2955997574c83353843d1f5ad0\"" Oct 2 19:06:41.351555 env[1745]: time="2023-10-02T19:06:41.351401018Z" level=info msg="CreateContainer within sandbox \"8932e5aa35004d696c01983d45b5c9f947ae3d2955997574c83353843d1f5ad0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:06:41.373167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount501008659.mount: Deactivated successfully. Oct 2 19:06:41.377703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4144105444.mount: Deactivated successfully. Oct 2 19:06:41.387029 env[1745]: time="2023-10-02T19:06:41.386942949Z" level=info msg="CreateContainer within sandbox \"8932e5aa35004d696c01983d45b5c9f947ae3d2955997574c83353843d1f5ad0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"940a24ed66e01314cd18b80973e35381f1754e88c9febd5e602666b3c35d6f3a\"" Oct 2 19:06:41.387865 env[1745]: time="2023-10-02T19:06:41.387770939Z" level=info msg="StartContainer for \"940a24ed66e01314cd18b80973e35381f1754e88c9febd5e602666b3c35d6f3a\"" Oct 2 19:06:41.442356 systemd[1]: Started cri-containerd-940a24ed66e01314cd18b80973e35381f1754e88c9febd5e602666b3c35d6f3a.scope. Oct 2 19:06:41.481344 systemd[1]: cri-containerd-940a24ed66e01314cd18b80973e35381f1754e88c9febd5e602666b3c35d6f3a.scope: Deactivated successfully. Oct 2 19:06:41.515370 env[1745]: time="2023-10-02T19:06:41.515301508Z" level=info msg="shim disconnected" id=940a24ed66e01314cd18b80973e35381f1754e88c9febd5e602666b3c35d6f3a Oct 2 19:06:41.515729 env[1745]: time="2023-10-02T19:06:41.515693266Z" level=warning msg="cleaning up after shim disconnected" id=940a24ed66e01314cd18b80973e35381f1754e88c9febd5e602666b3c35d6f3a namespace=k8s.io Oct 2 19:06:41.515856 env[1745]: time="2023-10-02T19:06:41.515828116Z" level=info msg="cleaning up dead shim" Oct 2 19:06:41.543324 env[1745]: time="2023-10-02T19:06:41.543251849Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:06:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3020 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:06:41Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/940a24ed66e01314cd18b80973e35381f1754e88c9febd5e602666b3c35d6f3a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:06:41.543794 env[1745]: time="2023-10-02T19:06:41.543709550Z" level=error msg="copy shim log" error="read /proc/self/fd/29: file already closed" Oct 2 19:06:41.544184 env[1745]: time="2023-10-02T19:06:41.544127613Z" level=error msg="Failed to pipe stdout of container \"940a24ed66e01314cd18b80973e35381f1754e88c9febd5e602666b3c35d6f3a\"" error="reading from a closed fifo" Oct 2 19:06:41.544314 env[1745]: time="2023-10-02T19:06:41.544266015Z" level=error msg="Failed to pipe stderr of container \"940a24ed66e01314cd18b80973e35381f1754e88c9febd5e602666b3c35d6f3a\"" error="reading from a closed fifo" Oct 2 19:06:41.546560 env[1745]: time="2023-10-02T19:06:41.546477091Z" level=error msg="StartContainer for \"940a24ed66e01314cd18b80973e35381f1754e88c9febd5e602666b3c35d6f3a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 
19:06:41.547384 kubelet[2204]: E1002 19:06:41.546892 2204 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="940a24ed66e01314cd18b80973e35381f1754e88c9febd5e602666b3c35d6f3a" Oct 2 19:06:41.547384 kubelet[2204]: E1002 19:06:41.547063 2204 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:06:41.547384 kubelet[2204]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:06:41.547384 kubelet[2204]: rm /hostbin/cilium-mount Oct 2 19:06:41.547762 kubelet[2204]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-csmzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-4xk2b_kube-system(3874ac1c-007c-40d9-9459-1c7ff05a2f15): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:06:41.548037 kubelet[2204]: E1002 19:06:41.547120 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-4xk2b" podUID=3874ac1c-007c-40d9-9459-1c7ff05a2f15 Oct 2 19:06:41.769466 env[1745]: time="2023-10-02T19:06:41.769403541Z" level=info msg="StopPodSandbox for \"8932e5aa35004d696c01983d45b5c9f947ae3d2955997574c83353843d1f5ad0\"" Oct 2 19:06:41.769672 env[1745]: time="2023-10-02T19:06:41.769497553Z" level=info msg="Container to stop \"940a24ed66e01314cd18b80973e35381f1754e88c9febd5e602666b3c35d6f3a\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Oct 2 19:06:41.788000 audit: BPF prog-id=87 op=UNLOAD Oct 2 19:06:41.788446 systemd[1]: cri-containerd-8932e5aa35004d696c01983d45b5c9f947ae3d2955997574c83353843d1f5ad0.scope: Deactivated successfully. Oct 2 19:06:41.791000 audit: BPF prog-id=90 op=UNLOAD Oct 2 19:06:41.853184 env[1745]: time="2023-10-02T19:06:41.853118093Z" level=info msg="shim disconnected" id=8932e5aa35004d696c01983d45b5c9f947ae3d2955997574c83353843d1f5ad0 Oct 2 19:06:41.853723 env[1745]: time="2023-10-02T19:06:41.853684783Z" level=warning msg="cleaning up after shim disconnected" id=8932e5aa35004d696c01983d45b5c9f947ae3d2955997574c83353843d1f5ad0 namespace=k8s.io Oct 2 19:06:41.853888 env[1745]: time="2023-10-02T19:06:41.853858995Z" level=info msg="cleaning up dead shim" Oct 2 19:06:41.880954 env[1745]: time="2023-10-02T19:06:41.880863213Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:06:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3053 runtime=io.containerd.runc.v2\n" Oct 2 19:06:41.881478 env[1745]: time="2023-10-02T19:06:41.881421946Z" level=info msg="TearDown network for sandbox \"8932e5aa35004d696c01983d45b5c9f947ae3d2955997574c83353843d1f5ad0\" successfully" Oct 2 19:06:41.881599 env[1745]: time="2023-10-02T19:06:41.881477749Z" level=info msg="StopPodSandbox for \"8932e5aa35004d696c01983d45b5c9f947ae3d2955997574c83353843d1f5ad0\" returns successfully" Oct 2 19:06:41.988173 kubelet[2204]: I1002 19:06:41.988103 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-host-proc-sys-kernel\") pod \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " Oct 2 19:06:41.988616 kubelet[2204]: I1002 19:06:41.988465 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3874ac1c-007c-40d9-9459-1c7ff05a2f15" (UID: "3874ac1c-007c-40d9-9459-1c7ff05a2f15"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:06:41.988768 kubelet[2204]: I1002 19:06:41.988746 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3874ac1c-007c-40d9-9459-1c7ff05a2f15-hubble-tls\") pod \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " Oct 2 19:06:41.989299 kubelet[2204]: I1002 19:06:41.989263 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-cni-path\") pod \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " Oct 2 19:06:41.989563 kubelet[2204]: I1002 19:06:41.989490 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-cni-path" (OuterVolumeSpecName: "cni-path") pod "3874ac1c-007c-40d9-9459-1c7ff05a2f15" (UID: "3874ac1c-007c-40d9-9459-1c7ff05a2f15"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:06:41.990361 kubelet[2204]: I1002 19:06:41.989819 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3874ac1c-007c-40d9-9459-1c7ff05a2f15-clustermesh-secrets\") pod \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " Oct 2 19:06:41.990917 kubelet[2204]: I1002 19:06:41.990679 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-cilium-run\") pod \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " Oct 2 19:06:41.991158 kubelet[2204]: I1002 19:06:41.991113 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-bpf-maps\") pod \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " Oct 2 19:06:41.991406 kubelet[2204]: I1002 19:06:41.991386 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-hostproc\") pod \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " Oct 2 19:06:41.991604 kubelet[2204]: I1002 19:06:41.991583 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-lib-modules\") pod \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " Oct 2 19:06:41.991792 kubelet[2204]: I1002 19:06:41.991759 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-etc-cni-netd\") pod \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " Oct 2 19:06:41.991994 kubelet[2204]: I1002 19:06:41.991972 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3874ac1c-007c-40d9-9459-1c7ff05a2f15-cilium-config-path\") pod \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " Oct 2 19:06:41.992198 kubelet[2204]: I1002 19:06:41.992162 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-host-proc-sys-net\") pod \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " Oct 2 19:06:41.992380 kubelet[2204]: I1002 19:06:41.992345 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csmzt\" (UniqueName: \"kubernetes.io/projected/3874ac1c-007c-40d9-9459-1c7ff05a2f15-kube-api-access-csmzt\") pod \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " Oct 2 19:06:41.992542 kubelet[2204]: I1002 19:06:41.992522 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-cilium-cgroup\") pod \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " Oct 2 19:06:41.992713 kubelet[2204]: I1002 19:06:41.992692 2204 
reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-xtables-lock\") pod \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\" (UID: \"3874ac1c-007c-40d9-9459-1c7ff05a2f15\") " Oct 2 19:06:41.992910 kubelet[2204]: I1002 19:06:41.992866 2204 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-host-proc-sys-kernel\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:41.993096 kubelet[2204]: I1002 19:06:41.993072 2204 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-cni-path\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:41.993310 kubelet[2204]: I1002 19:06:41.993253 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3874ac1c-007c-40d9-9459-1c7ff05a2f15" (UID: "3874ac1c-007c-40d9-9459-1c7ff05a2f15"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:06:41.994313 kubelet[2204]: I1002 19:06:41.991301 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3874ac1c-007c-40d9-9459-1c7ff05a2f15" (UID: "3874ac1c-007c-40d9-9459-1c7ff05a2f15"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:06:41.994522 kubelet[2204]: I1002 19:06:41.990744 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3874ac1c-007c-40d9-9459-1c7ff05a2f15" (UID: "3874ac1c-007c-40d9-9459-1c7ff05a2f15"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:06:41.994656 kubelet[2204]: I1002 19:06:41.993496 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-hostproc" (OuterVolumeSpecName: "hostproc") pod "3874ac1c-007c-40d9-9459-1c7ff05a2f15" (UID: "3874ac1c-007c-40d9-9459-1c7ff05a2f15"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:06:41.994787 kubelet[2204]: I1002 19:06:41.993539 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3874ac1c-007c-40d9-9459-1c7ff05a2f15" (UID: "3874ac1c-007c-40d9-9459-1c7ff05a2f15"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:06:41.994948 kubelet[2204]: I1002 19:06:41.993571 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3874ac1c-007c-40d9-9459-1c7ff05a2f15" (UID: "3874ac1c-007c-40d9-9459-1c7ff05a2f15"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:06:41.995111 kubelet[2204]: W1002 19:06:41.993841 2204 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/3874ac1c-007c-40d9-9459-1c7ff05a2f15/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:06:41.995243 kubelet[2204]: I1002 19:06:41.993865 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3874ac1c-007c-40d9-9459-1c7ff05a2f15" (UID: "3874ac1c-007c-40d9-9459-1c7ff05a2f15"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:06:41.995370 kubelet[2204]: I1002 19:06:41.994235 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3874ac1c-007c-40d9-9459-1c7ff05a2f15" (UID: "3874ac1c-007c-40d9-9459-1c7ff05a2f15"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:06:41.998648 kubelet[2204]: I1002 19:06:41.998599 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3874ac1c-007c-40d9-9459-1c7ff05a2f15-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3874ac1c-007c-40d9-9459-1c7ff05a2f15" (UID: "3874ac1c-007c-40d9-9459-1c7ff05a2f15"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:06:42.003025 kubelet[2204]: I1002 19:06:42.002971 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3874ac1c-007c-40d9-9459-1c7ff05a2f15-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3874ac1c-007c-40d9-9459-1c7ff05a2f15" (UID: "3874ac1c-007c-40d9-9459-1c7ff05a2f15"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:06:42.006811 kubelet[2204]: I1002 19:06:42.006738 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3874ac1c-007c-40d9-9459-1c7ff05a2f15-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3874ac1c-007c-40d9-9459-1c7ff05a2f15" (UID: "3874ac1c-007c-40d9-9459-1c7ff05a2f15"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:06:42.012753 kubelet[2204]: I1002 19:06:42.012701 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3874ac1c-007c-40d9-9459-1c7ff05a2f15-kube-api-access-csmzt" (OuterVolumeSpecName: "kube-api-access-csmzt") pod "3874ac1c-007c-40d9-9459-1c7ff05a2f15" (UID: "3874ac1c-007c-40d9-9459-1c7ff05a2f15"). InnerVolumeSpecName "kube-api-access-csmzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:06:42.073830 kubelet[2204]: E1002 19:06:42.073685 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:42.086687 kubelet[2204]: E1002 19:06:42.086645 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:06:42.094391 kubelet[2204]: I1002 19:06:42.094346 2204 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-hostproc\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:42.094391 kubelet[2204]: I1002 19:06:42.094391 2204 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-lib-modules\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:42.094563 kubelet[2204]: I1002 19:06:42.094417 2204 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-etc-cni-netd\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:42.094563 kubelet[2204]: I1002 19:06:42.094441 2204 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3874ac1c-007c-40d9-9459-1c7ff05a2f15-cilium-config-path\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:42.094563 kubelet[2204]: I1002 19:06:42.094465 2204 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-cilium-cgroup\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:42.094563 kubelet[2204]: I1002 19:06:42.094488 2204 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-xtables-lock\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:42.094563 kubelet[2204]: I1002 19:06:42.094511 2204 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-host-proc-sys-net\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:42.094563 kubelet[2204]: I1002 19:06:42.094534 2204 reconciler.go:399] "Volume detached for volume \"kube-api-access-csmzt\" (UniqueName: \"kubernetes.io/projected/3874ac1c-007c-40d9-9459-1c7ff05a2f15-kube-api-access-csmzt\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:42.094563 kubelet[2204]: I1002 19:06:42.094556 2204 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3874ac1c-007c-40d9-9459-1c7ff05a2f15-hubble-tls\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:42.095611 kubelet[2204]: I1002 19:06:42.094578 2204 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-bpf-maps\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:42.095611 kubelet[2204]: I1002 19:06:42.094605 2204 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3874ac1c-007c-40d9-9459-1c7ff05a2f15-clustermesh-secrets\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:42.095611 kubelet[2204]: I1002 19:06:42.094629 2204 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/3874ac1c-007c-40d9-9459-1c7ff05a2f15-cilium-run\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:06:42.302259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-940a24ed66e01314cd18b80973e35381f1754e88c9febd5e602666b3c35d6f3a-rootfs.mount: Deactivated successfully. Oct 2 19:06:42.302421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8932e5aa35004d696c01983d45b5c9f947ae3d2955997574c83353843d1f5ad0-rootfs.mount: Deactivated successfully. Oct 2 19:06:42.302547 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8932e5aa35004d696c01983d45b5c9f947ae3d2955997574c83353843d1f5ad0-shm.mount: Deactivated successfully. Oct 2 19:06:42.302679 systemd[1]: var-lib-kubelet-pods-3874ac1c\x2d007c\x2d40d9\x2d9459\x2d1c7ff05a2f15-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcsmzt.mount: Deactivated successfully. Oct 2 19:06:42.302810 systemd[1]: var-lib-kubelet-pods-3874ac1c\x2d007c\x2d40d9\x2d9459\x2d1c7ff05a2f15-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:06:42.302956 systemd[1]: var-lib-kubelet-pods-3874ac1c\x2d007c\x2d40d9\x2d9459\x2d1c7ff05a2f15-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:06:42.317979 env[1745]: time="2023-10-02T19:06:42.317517530Z" level=info msg="StopPodSandbox for \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\"" Oct 2 19:06:42.317979 env[1745]: time="2023-10-02T19:06:42.317654780Z" level=info msg="TearDown network for sandbox \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\" successfully" Oct 2 19:06:42.317979 env[1745]: time="2023-10-02T19:06:42.317708602Z" level=info msg="StopPodSandbox for \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\" returns successfully" Oct 2 19:06:42.320518 kubelet[2204]: I1002 19:06:42.320463 2204 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=70ef4292-8675-4667-88cc-b5f4d4047034 path="/var/lib/kubelet/pods/70ef4292-8675-4667-88cc-b5f4d4047034/volumes" Oct 2 19:06:42.331079 systemd[1]: Removed slice kubepods-burstable-pod3874ac1c_007c_40d9_9459_1c7ff05a2f15.slice. 
Oct 2 19:06:42.775363 kubelet[2204]: I1002 19:06:42.775317 2204 scope.go:115] "RemoveContainer" containerID="940a24ed66e01314cd18b80973e35381f1754e88c9febd5e602666b3c35d6f3a" Oct 2 19:06:42.777981 env[1745]: time="2023-10-02T19:06:42.777628572Z" level=info msg="RemoveContainer for \"940a24ed66e01314cd18b80973e35381f1754e88c9febd5e602666b3c35d6f3a\"" Oct 2 19:06:42.783306 env[1745]: time="2023-10-02T19:06:42.783248954Z" level=info msg="RemoveContainer for \"940a24ed66e01314cd18b80973e35381f1754e88c9febd5e602666b3c35d6f3a\" returns successfully" Oct 2 19:06:43.076084 kubelet[2204]: E1002 19:06:43.075696 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:44.076235 kubelet[2204]: E1002 19:06:44.076184 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:44.319976 kubelet[2204]: I1002 19:06:44.319920 2204 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=3874ac1c-007c-40d9-9459-1c7ff05a2f15 path="/var/lib/kubelet/pods/3874ac1c-007c-40d9-9459-1c7ff05a2f15/volumes" Oct 2 19:06:44.621631 kubelet[2204]: W1002 19:06:44.621585 2204 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3874ac1c_007c_40d9_9459_1c7ff05a2f15.slice/cri-containerd-940a24ed66e01314cd18b80973e35381f1754e88c9febd5e602666b3c35d6f3a.scope WatchSource:0}: container "940a24ed66e01314cd18b80973e35381f1754e88c9febd5e602666b3c35d6f3a" in namespace "k8s.io": not found Oct 2 19:06:45.077480 kubelet[2204]: E1002 19:06:45.077406 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:45.795767 kubelet[2204]: I1002 19:06:45.795725 2204 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:06:45.796078 kubelet[2204]: E1002 19:06:45.796054 2204 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="3874ac1c-007c-40d9-9459-1c7ff05a2f15" containerName="mount-cgroup" Oct 2 19:06:45.796232 kubelet[2204]: I1002 19:06:45.796211 2204 memory_manager.go:345] "RemoveStaleState removing state" podUID="3874ac1c-007c-40d9-9459-1c7ff05a2f15" containerName="mount-cgroup" Oct 2 19:06:45.796772 kubelet[2204]: I1002 19:06:45.796716 2204 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:06:45.807162 systemd[1]: Created slice kubepods-burstable-podabb233f1_a9fe_4d11_a7c7_7a6ee5d4e8ef.slice. Oct 2 19:06:45.826943 systemd[1]: Created slice kubepods-besteffort-pod766735ec_3310_4447_b638_6ab031198568.slice. 
Oct 2 19:06:45.914562 kubelet[2204]: I1002 19:06:45.914497 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-host-proc-sys-kernel\") pod \"cilium-zbbsj\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " pod="kube-system/cilium-zbbsj" Oct 2 19:06:45.914855 kubelet[2204]: I1002 19:06:45.914820 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-hubble-tls\") pod \"cilium-zbbsj\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " pod="kube-system/cilium-zbbsj" Oct 2 19:06:45.915124 kubelet[2204]: I1002 19:06:45.915101 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjlqx\" (UniqueName: \"kubernetes.io/projected/766735ec-3310-4447-b638-6ab031198568-kube-api-access-tjlqx\") pod \"cilium-operator-69b677f97c-ljjwq\" (UID: \"766735ec-3310-4447-b638-6ab031198568\") " pod="kube-system/cilium-operator-69b677f97c-ljjwq" Oct 2 19:06:45.915337 kubelet[2204]: I1002 19:06:45.915297 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-hostproc\") pod \"cilium-zbbsj\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " pod="kube-system/cilium-zbbsj" Oct 2 19:06:45.915555 kubelet[2204]: I1002 19:06:45.915516 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-cni-path\") pod \"cilium-zbbsj\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " pod="kube-system/cilium-zbbsj" Oct 2 19:06:45.915745 kubelet[2204]: I1002 19:06:45.915723 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-clustermesh-secrets\") pod \"cilium-zbbsj\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " pod="kube-system/cilium-zbbsj" Oct 2 19:06:45.915949 kubelet[2204]: I1002 19:06:45.915928 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-lib-modules\") pod \"cilium-zbbsj\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " pod="kube-system/cilium-zbbsj" Oct 2 19:06:45.916124 kubelet[2204]: I1002 19:06:45.916103 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-xtables-lock\") pod \"cilium-zbbsj\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " pod="kube-system/cilium-zbbsj" Oct 2 19:06:45.916298 kubelet[2204]: I1002 19:06:45.916278 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-cilium-config-path\") pod \"cilium-zbbsj\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " pod="kube-system/cilium-zbbsj" Oct 2 19:06:45.916474 kubelet[2204]: I1002 19:06:45.916453 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-host-proc-sys-net\") pod \"cilium-zbbsj\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " pod="kube-system/cilium-zbbsj" Oct 2 19:06:45.916629 kubelet[2204]: I1002 19:06:45.916608 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-bpf-maps\") pod \"cilium-zbbsj\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " pod="kube-system/cilium-zbbsj" Oct 2 19:06:45.916811 kubelet[2204]: I1002 19:06:45.916791 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-cilium-cgroup\") pod \"cilium-zbbsj\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " pod="kube-system/cilium-zbbsj" Oct 2 19:06:45.916982 kubelet[2204]: I1002 19:06:45.916962 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-cilium-ipsec-secrets\") pod \"cilium-zbbsj\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " pod="kube-system/cilium-zbbsj" Oct 2 19:06:45.917137 kubelet[2204]: I1002 19:06:45.917116 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/766735ec-3310-4447-b638-6ab031198568-cilium-config-path\") pod \"cilium-operator-69b677f97c-ljjwq\" (UID: \"766735ec-3310-4447-b638-6ab031198568\") " pod="kube-system/cilium-operator-69b677f97c-ljjwq" Oct 2 19:06:45.917316 kubelet[2204]: I1002 19:06:45.917294 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-cilium-run\") pod \"cilium-zbbsj\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " pod="kube-system/cilium-zbbsj" Oct 2 19:06:45.917484 kubelet[2204]: I1002 19:06:45.917464 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-etc-cni-netd\") pod \"cilium-zbbsj\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " pod="kube-system/cilium-zbbsj" Oct 2 19:06:45.917648 kubelet[2204]: I1002 19:06:45.917628 2204 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tcqv\" (UniqueName: \"kubernetes.io/projected/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-kube-api-access-7tcqv\") pod \"cilium-zbbsj\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " pod="kube-system/cilium-zbbsj" Oct 2 19:06:46.078779 kubelet[2204]: E1002 19:06:46.077745 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:46.125439 env[1745]: time="2023-10-02T19:06:46.124790098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zbbsj,Uid:abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef,Namespace:kube-system,Attempt:0,}" Oct 2 19:06:46.134166 env[1745]: time="2023-10-02T19:06:46.134104441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-ljjwq,Uid:766735ec-3310-4447-b638-6ab031198568,Namespace:kube-system,Attempt:0,}" Oct 2 19:06:46.161393 env[1745]: time="2023-10-02T19:06:46.161214322Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:06:46.161393 env[1745]: time="2023-10-02T19:06:46.161366681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:06:46.161637 env[1745]: time="2023-10-02T19:06:46.161430536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:06:46.162105 env[1745]: time="2023-10-02T19:06:46.161764439Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9a76136f0c07740cc3645bb01bc77763ff0f5a2f12a20e5101ba0f2a220f2d7f pid=3080 runtime=io.containerd.runc.v2 Oct 2 19:06:46.183646 env[1745]: time="2023-10-02T19:06:46.183250000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:06:46.184052 env[1745]: time="2023-10-02T19:06:46.183955068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:06:46.184305 env[1745]: time="2023-10-02T19:06:46.184220508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:06:46.185583 env[1745]: time="2023-10-02T19:06:46.185464665Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/40577df99b578e911bb54c11ecfdf66177b31183305ba7ae05772e6360a870f5 pid=3097 runtime=io.containerd.runc.v2 Oct 2 19:06:46.197043 systemd[1]: Started cri-containerd-9a76136f0c07740cc3645bb01bc77763ff0f5a2f12a20e5101ba0f2a220f2d7f.scope. Oct 2 19:06:46.227732 systemd[1]: Started cri-containerd-40577df99b578e911bb54c11ecfdf66177b31183305ba7ae05772e6360a870f5.scope. 
Oct 2 19:06:46.258464 kernel: kauditd_printk_skb: 51 callbacks suppressed Oct 2 19:06:46.258627 kernel: audit: type=1400 audit(1696273606.247:755): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.247000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.247000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.268592 kernel: audit: type=1400 audit(1696273606.247:756): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.248000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.293136 kernel: audit: type=1400 audit(1696273606.248:757): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.293714 kernel: audit: type=1400 audit(1696273606.248:758): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.248000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.248000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.302274 kernel: audit: type=1400 audit(1696273606.248:759): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.302370 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:06:46.248000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.311957 kernel: audit: type=1400 audit(1696273606.248:760): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.312061 kernel: audit: audit_lost=4 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:06:46.248000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.321960 kernel: audit: type=1400 audit(1696273606.248:761): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.322060 kernel: audit: backlog limit exceeded Oct 2 19:06:46.248000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.248000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.250000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.250000 audit: BPF prog-id=91 op=LOAD Oct 2 19:06:46.258000 audit[3094]: AVC avc: denied { bpf } for pid=3094 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.258000 audit[3094]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001bdb38 a2=10 a3=0 items=0 ppid=3080 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:06:46.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961373631333666306330373734306363333634356262303162633737 Oct 2 19:06:46.258000 audit[3094]: AVC avc: denied { perfmon } for pid=3094 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.258000 audit[3094]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001bd5a0 a2=3c a3=0 items=0 ppid=3080 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:06:46.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961373631333666306330373734306363333634356262303162633737 Oct 2 19:06:46.258000 audit[3094]: AVC avc: denied { bpf } for pid=3094 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.258000 audit[3094]: AVC avc: denied { bpf } for pid=3094 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.258000 audit[3094]: AVC avc: denied { bpf } for pid=3094 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.258000 audit[3094]: AVC avc: denied { perfmon } for pid=3094 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.258000 audit[3094]: AVC avc: denied { perfmon } for pid=3094 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.258000 audit[3094]: AVC avc: denied { perfmon } for pid=3094 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:06:46.258000 audit[3094]: AVC avc: denied { perfmon } for pid=3094 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.258000 audit[3094]: AVC avc: denied { perfmon } for pid=3094 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.258000 audit[3094]: AVC avc: denied { bpf } for pid=3094 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.258000 audit[3094]: AVC avc: denied { bpf } for pid=3094 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.258000 audit: BPF prog-id=92 op=LOAD Oct 2 19:06:46.258000 audit[3094]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bd8e0 a2=78 a3=0 items=0 ppid=3080 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:06:46.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961373631333666306330373734306363333634356262303162633737 Oct 2 19:06:46.258000 audit[3094]: AVC avc: denied { bpf } for pid=3094 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.258000 audit[3094]: AVC avc: denied { bpf } for pid=3094 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.258000 audit[3094]: AVC avc: denied { perfmon } for pid=3094 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.258000 audit[3094]: AVC avc: denied { perfmon } for pid=3094 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.258000 audit[3094]: AVC avc: denied { perfmon } for pid=3094 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.258000 audit[3094]: AVC avc: denied { perfmon } for pid=3094 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.258000 audit[3094]: AVC avc: denied { perfmon } for pid=3094 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.258000 audit[3094]: AVC avc: denied { bpf } for pid=3094 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.258000 audit[3094]: AVC avc: denied { bpf } for pid=3094 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.258000 audit: BPF prog-id=93 op=LOAD Oct 2 19:06:46.258000 audit[3094]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001bd670 a2=78 a3=0 items=0 ppid=3080 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:06:46.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961373631333666306330373734306363333634356262303162633737 Oct 2 19:06:46.258000 audit: BPF prog-id=93 op=UNLOAD Oct 2 19:06:46.258000 audit: BPF prog-id=92 op=UNLOAD Oct 2 19:06:46.259000 audit[3094]: AVC avc: denied { bpf } for pid=3094 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.259000 audit[3094]: AVC avc: denied { bpf } for pid=3094 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.259000 audit[3094]: AVC avc: denied { bpf } for pid=3094 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.259000 audit[3094]: AVC avc: denied { perfmon } for pid=3094 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.259000 audit[3094]: AVC avc: denied { perfmon } for pid=3094 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.259000 audit[3094]: AVC avc: denied { perfmon } for pid=3094 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.259000 audit[3094]: AVC avc: denied { perfmon } for pid=3094 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.259000 audit[3094]: AVC avc: denied { perfmon } for pid=3094 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.259000 audit[3094]: AVC avc: denied { bpf } for pid=3094 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.259000 audit[3094]: AVC avc: denied { bpf } for pid=3094 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.259000 audit: BPF prog-id=94 op=LOAD Oct 2 19:06:46.259000 audit[3094]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bdb40 a2=78 a3=0 items=0 ppid=3080 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:06:46.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961373631333666306330373734306363333634356262303162633737 Oct 2 19:06:46.294000 audit[1]: 
AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.294000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.294000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.294000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.294000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.294000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.294000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.294000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.326000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.326000 audit: BPF prog-id=95 op=LOAD Oct 2 19:06:46.333000 audit[3114]: AVC avc: denied { bpf } for pid=3114 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.333000 audit[3114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001bdb38 a2=10 a3=0 items=0 ppid=3097 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:06:46.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430353737646639396235373865393131626235346331316563666466 Oct 2 19:06:46.333000 audit[3114]: AVC avc: denied { perfmon } for pid=3114 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.333000 audit[3114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001bd5a0 a2=3c a3=0 items=0 ppid=3097 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:06:46.333000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430353737646639396235373865393131626235346331316563666466 Oct 2 19:06:46.333000 audit[3114]: AVC avc: denied { bpf } for pid=3114 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.333000 audit[3114]: AVC avc: denied { bpf } for pid=3114 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.333000 audit[3114]: AVC avc: denied { bpf } for pid=3114 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.333000 audit[3114]: AVC avc: denied { perfmon } for pid=3114 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.333000 audit[3114]: AVC avc: denied { perfmon } for pid=3114 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.333000 audit[3114]: AVC avc: denied { perfmon } for pid=3114 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.333000 audit[3114]: AVC avc: denied { perfmon } for pid=3114 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.333000 audit[3114]: AVC avc: denied { perfmon } for pid=3114 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.333000 audit[3114]: AVC avc: denied { bpf } for pid=3114 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.333000 audit[3114]: AVC avc: denied { bpf } for pid=3114 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.333000 audit: BPF prog-id=96 op=LOAD Oct 2 19:06:46.333000 audit[3114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bd8e0 a2=78 a3=0 items=0 ppid=3097 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:06:46.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430353737646639396235373865393131626235346331316563666466 Oct 2 19:06:46.334000 audit[3114]: AVC avc: denied { bpf } for pid=3114 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.334000 audit[3114]: AVC avc: denied { bpf } for pid=3114 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.334000 audit[3114]: AVC avc: denied { perfmon } for 
pid=3114 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.334000 audit[3114]: AVC avc: denied { perfmon } for pid=3114 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.334000 audit[3114]: AVC avc: denied { perfmon } for pid=3114 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.334000 audit[3114]: AVC avc: denied { perfmon } for pid=3114 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.334000 audit[3114]: AVC avc: denied { perfmon } for pid=3114 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.334000 audit[3114]: AVC avc: denied { bpf } for pid=3114 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.334000 audit[3114]: AVC avc: denied { bpf } for pid=3114 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.334000 audit: BPF prog-id=97 op=LOAD Oct 2 19:06:46.334000 audit[3114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001bd670 a2=78 a3=0 items=0 ppid=3097 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:06:46.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430353737646639396235373865393131626235346331316563666466 Oct 2 19:06:46.334000 audit: BPF prog-id=97 op=UNLOAD Oct 2 19:06:46.334000 audit: BPF prog-id=96 op=UNLOAD Oct 2 19:06:46.334000 audit[3114]: AVC avc: denied { bpf } for pid=3114 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.334000 audit[3114]: AVC avc: denied { bpf } for pid=3114 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.334000 audit[3114]: AVC avc: denied { bpf } for pid=3114 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.334000 audit[3114]: AVC avc: denied { perfmon } for pid=3114 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.334000 audit[3114]: AVC avc: denied { perfmon } for pid=3114 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.334000 audit[3114]: AVC avc: denied { perfmon } for pid=3114 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.334000 audit[3114]: AVC avc: denied { perfmon } for pid=3114 
comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.334000 audit[3114]: AVC avc: denied { perfmon } for pid=3114 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.334000 audit[3114]: AVC avc: denied { bpf } for pid=3114 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.334000 audit[3114]: AVC avc: denied { bpf } for pid=3114 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:46.334000 audit: BPF prog-id=98 op=LOAD Oct 2 19:06:46.334000 audit[3114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bdb40 a2=78 a3=0 items=0 ppid=3097 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:06:46.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430353737646639396235373865393131626235346331316563666466 Oct 2 19:06:46.360291 env[1745]: time="2023-10-02T19:06:46.360229477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zbbsj,Uid:abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a76136f0c07740cc3645bb01bc77763ff0f5a2f12a20e5101ba0f2a220f2d7f\"" Oct 2 19:06:46.366299 env[1745]: time="2023-10-02T19:06:46.366219106Z" level=info msg="CreateContainer within sandbox \"9a76136f0c07740cc3645bb01bc77763ff0f5a2f12a20e5101ba0f2a220f2d7f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:06:46.390428 env[1745]: time="2023-10-02T19:06:46.390351759Z" level=info msg="CreateContainer within sandbox \"9a76136f0c07740cc3645bb01bc77763ff0f5a2f12a20e5101ba0f2a220f2d7f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"96e0292f67db2c83d9069a215989a28fe048d72851480bb8e7483f369c21a843\"" Oct 2 19:06:46.391565 env[1745]: time="2023-10-02T19:06:46.391517492Z" level=info msg="StartContainer for \"96e0292f67db2c83d9069a215989a28fe048d72851480bb8e7483f369c21a843\"" Oct 2 19:06:46.409038 env[1745]: time="2023-10-02T19:06:46.408978106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-ljjwq,Uid:766735ec-3310-4447-b638-6ab031198568,Namespace:kube-system,Attempt:0,} returns sandbox id \"40577df99b578e911bb54c11ecfdf66177b31183305ba7ae05772e6360a870f5\"" Oct 2 19:06:46.411882 env[1745]: time="2023-10-02T19:06:46.411819583Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\"" Oct 2 19:06:46.440175 systemd[1]: Started cri-containerd-96e0292f67db2c83d9069a215989a28fe048d72851480bb8e7483f369c21a843.scope. Oct 2 19:06:46.481202 systemd[1]: cri-containerd-96e0292f67db2c83d9069a215989a28fe048d72851480bb8e7483f369c21a843.scope: Deactivated successfully. 
Oct 2 19:06:46.510452 env[1745]: time="2023-10-02T19:06:46.510384086Z" level=info msg="shim disconnected" id=96e0292f67db2c83d9069a215989a28fe048d72851480bb8e7483f369c21a843 Oct 2 19:06:46.510826 env[1745]: time="2023-10-02T19:06:46.510781040Z" level=warning msg="cleaning up after shim disconnected" id=96e0292f67db2c83d9069a215989a28fe048d72851480bb8e7483f369c21a843 namespace=k8s.io Oct 2 19:06:46.510973 env[1745]: time="2023-10-02T19:06:46.510945124Z" level=info msg="cleaning up dead shim" Oct 2 19:06:46.537433 env[1745]: time="2023-10-02T19:06:46.537357341Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:06:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3177 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:06:46Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/96e0292f67db2c83d9069a215989a28fe048d72851480bb8e7483f369c21a843/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:06:46.538160 env[1745]: time="2023-10-02T19:06:46.538082054Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed" Oct 2 19:06:46.543100 env[1745]: time="2023-10-02T19:06:46.538596181Z" level=error msg="Failed to pipe stdout of container \"96e0292f67db2c83d9069a215989a28fe048d72851480bb8e7483f369c21a843\"" error="reading from a closed fifo" Oct 2 19:06:46.543816 env[1745]: time="2023-10-02T19:06:46.543010930Z" level=error msg="Failed to pipe stderr of container \"96e0292f67db2c83d9069a215989a28fe048d72851480bb8e7483f369c21a843\"" error="reading from a closed fifo" Oct 2 19:06:46.545699 env[1745]: time="2023-10-02T19:06:46.545610188Z" level=error msg="StartContainer for \"96e0292f67db2c83d9069a215989a28fe048d72851480bb8e7483f369c21a843\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:06:46.546464 kubelet[2204]: E1002 19:06:46.546419 2204 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="96e0292f67db2c83d9069a215989a28fe048d72851480bb8e7483f369c21a843" Oct 2 19:06:46.546662 kubelet[2204]: E1002 19:06:46.546631 2204 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:06:46.546662 kubelet[2204]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:06:46.546662 kubelet[2204]: rm /hostbin/cilium-mount Oct 2 19:06:46.546662 kubelet[2204]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7tcqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-zbbsj_kube-system(abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:06:46.547205 kubelet[2204]: E1002 19:06:46.547169 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zbbsj" podUID=abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef Oct 2 19:06:46.794927 env[1745]: time="2023-10-02T19:06:46.794841243Z" level=info msg="CreateContainer within sandbox \"9a76136f0c07740cc3645bb01bc77763ff0f5a2f12a20e5101ba0f2a220f2d7f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:06:46.826777 env[1745]: time="2023-10-02T19:06:46.826689563Z" level=info msg="CreateContainer within sandbox \"9a76136f0c07740cc3645bb01bc77763ff0f5a2f12a20e5101ba0f2a220f2d7f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16\"" Oct 2 19:06:46.828201 env[1745]: time="2023-10-02T19:06:46.828151746Z" level=info msg="StartContainer for \"10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16\"" Oct 2 19:06:46.873159 systemd[1]: Started cri-containerd-10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16.scope. Oct 2 19:06:46.915552 systemd[1]: cri-containerd-10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16.scope: Deactivated successfully. 
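Both mount-cgroup attempts above die at the same point: "write /proc/self/attr/keycreate: invalid argument" during container init. Before exec'ing the container process, runc writes the container's SELinux process label to /proc/self/attr/keycreate so that kernel keyrings created afterwards carry that label; if the kernel or the loaded policy rejects the label, the write returns EINVAL and the create is aborted, which is the failure chain containerd and kubelet report here. A minimal Go sketch of that single write follows; the label value is only an example, mirroring the type spc_t / level s0 SELinuxOptions in the init-container spec logged above.

// Minimal reproduction of the failing step: write an SELinux label to
// /proc/self/attr/keycreate. On this host the kernel rejects it, so the
// same EINVAL that aborts "runc create" in the log shows up here.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Example label only, derived from the pod spec in the log above.
	const label = "system_u:system_r:spc_t:s0"
	f, err := os.OpenFile("/proc/self/attr/keycreate", os.O_WRONLY, 0)
	if err != nil {
		fmt.Fprintln(os.Stderr, "open:", err)
		os.Exit(1)
	}
	defer f.Close()
	if _, err := f.Write([]byte(label)); err != nil {
		fmt.Fprintln(os.Stderr, "write:", err)
		os.Exit(1)
	}
	fmt.Println("keycreate label set to", label)
}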
Oct 2 19:06:46.936176 env[1745]: time="2023-10-02T19:06:46.936079266Z" level=info msg="shim disconnected" id=10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16 Oct 2 19:06:46.936524 env[1745]: time="2023-10-02T19:06:46.936490189Z" level=warning msg="cleaning up after shim disconnected" id=10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16 namespace=k8s.io Oct 2 19:06:46.936675 env[1745]: time="2023-10-02T19:06:46.936647756Z" level=info msg="cleaning up dead shim" Oct 2 19:06:46.963407 env[1745]: time="2023-10-02T19:06:46.963326625Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:06:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3215 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:06:46Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:06:46.966980 env[1745]: time="2023-10-02T19:06:46.964081136Z" level=error msg="copy shim log" error="read /proc/self/fd/37: file already closed" Oct 2 19:06:46.967230 env[1745]: time="2023-10-02T19:06:46.964512027Z" level=error msg="Failed to pipe stderr of container \"10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16\"" error="reading from a closed fifo" Oct 2 19:06:46.967387 env[1745]: time="2023-10-02T19:06:46.966979255Z" level=error msg="Failed to pipe stdout of container \"10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16\"" error="reading from a closed fifo" Oct 2 19:06:46.969739 env[1745]: time="2023-10-02T19:06:46.969650673Z" level=error msg="StartContainer for \"10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:06:46.970505 kubelet[2204]: E1002 19:06:46.970238 2204 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16" Oct 2 19:06:46.970505 kubelet[2204]: E1002 19:06:46.970391 2204 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:06:46.970505 kubelet[2204]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:06:46.970505 kubelet[2204]: rm /hostbin/cilium-mount Oct 2 19:06:46.970920 kubelet[2204]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7tcqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-zbbsj_kube-system(abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:06:46.971053 kubelet[2204]: E1002 19:06:46.970451 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zbbsj" podUID=abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef Oct 2 19:06:47.080518 kubelet[2204]: E1002 19:06:47.080376 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:47.088496 kubelet[2204]: E1002 19:06:47.088449 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:06:47.796746 kubelet[2204]: I1002 19:06:47.795984 2204 scope.go:115] "RemoveContainer" containerID="96e0292f67db2c83d9069a215989a28fe048d72851480bb8e7483f369c21a843" Oct 2 19:06:47.796746 kubelet[2204]: I1002 19:06:47.796526 2204 scope.go:115] "RemoveContainer" containerID="96e0292f67db2c83d9069a215989a28fe048d72851480bb8e7483f369c21a843" Oct 2 19:06:47.798813 env[1745]: time="2023-10-02T19:06:47.798695433Z" level=info msg="RemoveContainer for \"96e0292f67db2c83d9069a215989a28fe048d72851480bb8e7483f369c21a843\"" Oct 2 19:06:47.799717 env[1745]: time="2023-10-02T19:06:47.799654025Z" level=info msg="RemoveContainer for \"96e0292f67db2c83d9069a215989a28fe048d72851480bb8e7483f369c21a843\"" Oct 2 19:06:47.801711 env[1745]: time="2023-10-02T19:06:47.801641411Z" level=error msg="RemoveContainer for \"96e0292f67db2c83d9069a215989a28fe048d72851480bb8e7483f369c21a843\" failed" error="failed to set removing state for container \"96e0292f67db2c83d9069a215989a28fe048d72851480bb8e7483f369c21a843\": 
container is already in removing state" Oct 2 19:06:47.802249 kubelet[2204]: E1002 19:06:47.802219 2204 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"96e0292f67db2c83d9069a215989a28fe048d72851480bb8e7483f369c21a843\": container is already in removing state" containerID="96e0292f67db2c83d9069a215989a28fe048d72851480bb8e7483f369c21a843" Oct 2 19:06:47.803127 kubelet[2204]: E1002 19:06:47.802426 2204 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "96e0292f67db2c83d9069a215989a28fe048d72851480bb8e7483f369c21a843": container is already in removing state; Skipping pod "cilium-zbbsj_kube-system(abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef)" Oct 2 19:06:47.803127 kubelet[2204]: E1002 19:06:47.802841 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-zbbsj_kube-system(abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef)\"" pod="kube-system/cilium-zbbsj" podUID=abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef Oct 2 19:06:47.804973 env[1745]: time="2023-10-02T19:06:47.804802463Z" level=info msg="RemoveContainer for \"96e0292f67db2c83d9069a215989a28fe048d72851480bb8e7483f369c21a843\" returns successfully" Oct 2 19:06:47.849300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount506201140.mount: Deactivated successfully. Oct 2 19:06:48.081147 kubelet[2204]: E1002 19:06:48.080979 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:48.800017 kubelet[2204]: E1002 19:06:48.799979 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-zbbsj_kube-system(abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef)\"" pod="kube-system/cilium-zbbsj" podUID=abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef Oct 2 19:06:48.855276 env[1745]: time="2023-10-02T19:06:48.855194878Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:06:48.858695 env[1745]: time="2023-10-02T19:06:48.858628271Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e0bfc5d64e2c86e8497f9da5fbf169dc17a08c923bc75187d41ff880cb71c12f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:06:48.861878 env[1745]: time="2023-10-02T19:06:48.861826596Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:06:48.862948 env[1745]: time="2023-10-02T19:06:48.862871064Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\" returns image reference \"sha256:e0bfc5d64e2c86e8497f9da5fbf169dc17a08c923bc75187d41ff880cb71c12f\"" Oct 2 19:06:48.867160 env[1745]: time="2023-10-02T19:06:48.867083328Z" level=info msg="CreateContainer within sandbox \"40577df99b578e911bb54c11ecfdf66177b31183305ba7ae05772e6360a870f5\" for container 
&ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:06:48.886969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2771132744.mount: Deactivated successfully. Oct 2 19:06:48.898289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3064204496.mount: Deactivated successfully. Oct 2 19:06:48.904452 env[1745]: time="2023-10-02T19:06:48.904390918Z" level=info msg="CreateContainer within sandbox \"40577df99b578e911bb54c11ecfdf66177b31183305ba7ae05772e6360a870f5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad\"" Oct 2 19:06:48.905643 env[1745]: time="2023-10-02T19:06:48.905592809Z" level=info msg="StartContainer for \"43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad\"" Oct 2 19:06:48.954937 systemd[1]: Started cri-containerd-43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad.scope. Oct 2 19:06:48.997000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.997000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.997000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.997000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.997000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.997000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.997000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.997000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.997000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.998000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.998000 audit: BPF prog-id=99 op=LOAD Oct 2 19:06:48.998000 audit[3235]: AVC avc: denied { bpf } for pid=3235 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.998000 audit[3235]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001bdb38 a2=10 a3=0 items=0 ppid=3097 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:06:48.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433333839626535363765373737316364333165613134613338386361 Oct 2 19:06:48.998000 audit[3235]: AVC avc: denied { perfmon } for pid=3235 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.998000 audit[3235]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001bd5a0 a2=3c a3=0 items=0 ppid=3097 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:06:48.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433333839626535363765373737316364333165613134613338386361 Oct 2 19:06:48.998000 audit[3235]: AVC avc: denied { bpf } for pid=3235 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.998000 audit[3235]: AVC avc: denied { bpf } for pid=3235 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.998000 audit[3235]: AVC avc: denied { bpf } for pid=3235 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.998000 audit[3235]: AVC avc: denied { perfmon } for pid=3235 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.998000 audit[3235]: AVC avc: denied { perfmon } for pid=3235 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.998000 audit[3235]: AVC avc: denied { perfmon } for pid=3235 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.998000 audit[3235]: AVC avc: denied { perfmon } for pid=3235 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.998000 audit[3235]: AVC avc: denied { perfmon } for pid=3235 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.998000 audit[3235]: AVC avc: denied { bpf } for pid=3235 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.998000 audit[3235]: AVC avc: denied { bpf } for pid=3235 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.998000 audit: BPF prog-id=100 op=LOAD Oct 2 19:06:48.998000 audit[3235]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bd8e0 a2=78 a3=0 items=0 ppid=3097 pid=3235 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:06:48.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433333839626535363765373737316364333165613134613338386361 Oct 2 19:06:48.999000 audit[3235]: AVC avc: denied { bpf } for pid=3235 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.999000 audit[3235]: AVC avc: denied { bpf } for pid=3235 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.999000 audit[3235]: AVC avc: denied { perfmon } for pid=3235 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.999000 audit[3235]: AVC avc: denied { perfmon } for pid=3235 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.999000 audit[3235]: AVC avc: denied { perfmon } for pid=3235 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.999000 audit[3235]: AVC avc: denied { perfmon } for pid=3235 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.999000 audit[3235]: AVC avc: denied { perfmon } for pid=3235 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.999000 audit[3235]: AVC avc: denied { bpf } for pid=3235 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.999000 audit[3235]: AVC avc: denied { bpf } for pid=3235 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.999000 audit: BPF prog-id=101 op=LOAD Oct 2 19:06:48.999000 audit[3235]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001bd670 a2=78 a3=0 items=0 ppid=3097 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:06:48.999000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433333839626535363765373737316364333165613134613338386361 Oct 2 19:06:48.999000 audit: BPF prog-id=101 op=UNLOAD Oct 2 19:06:48.999000 audit: BPF prog-id=100 op=UNLOAD Oct 2 19:06:48.999000 audit[3235]: AVC avc: denied { bpf } for pid=3235 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.999000 audit[3235]: AVC avc: denied { bpf } for pid=3235 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.999000 audit[3235]: AVC avc: denied { bpf } for pid=3235 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.999000 audit[3235]: AVC avc: denied { perfmon } for pid=3235 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.999000 audit[3235]: AVC avc: denied { perfmon } for pid=3235 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.999000 audit[3235]: AVC avc: denied { perfmon } for pid=3235 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.999000 audit[3235]: AVC avc: denied { perfmon } for pid=3235 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.999000 audit[3235]: AVC avc: denied { perfmon } for pid=3235 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.999000 audit[3235]: AVC avc: denied { bpf } for pid=3235 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.999000 audit[3235]: AVC avc: denied { bpf } for pid=3235 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:06:48.999000 audit: BPF prog-id=102 op=LOAD Oct 2 19:06:48.999000 audit[3235]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bdb40 a2=78 a3=0 items=0 ppid=3097 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:06:48.999000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433333839626535363765373737316364333165613134613338386361 Oct 2 19:06:49.032369 env[1745]: time="2023-10-02T19:06:49.032307850Z" level=info msg="StartContainer for \"43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad\" returns successfully" Oct 2 19:06:49.081363 kubelet[2204]: E1002 19:06:49.081183 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:49.111000 audit[3246]: AVC avc: denied { map_create } for pid=3246 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c158,c297 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c158,c297 tclass=bpf permissive=0 Oct 2 19:06:49.111000 audit[3246]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-13 a0=0 a1=4000651768 a2=48 a3=0 items=0 ppid=3097 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c158,c297 key=(null) Oct 2 19:06:49.111000 audit: PROCTITLE 
proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:06:49.617202 kubelet[2204]: W1002 19:06:49.617135 2204 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabb233f1_a9fe_4d11_a7c7_7a6ee5d4e8ef.slice/cri-containerd-96e0292f67db2c83d9069a215989a28fe048d72851480bb8e7483f369c21a843.scope WatchSource:0}: container "96e0292f67db2c83d9069a215989a28fe048d72851480bb8e7483f369c21a843" in namespace "k8s.io": not found Oct 2 19:06:50.081930 kubelet[2204]: E1002 19:06:50.081856 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:51.082954 kubelet[2204]: E1002 19:06:51.082852 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:51.894443 kubelet[2204]: E1002 19:06:51.894398 2204 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:52.083599 kubelet[2204]: E1002 19:06:52.083565 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:52.089576 kubelet[2204]: E1002 19:06:52.089537 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:06:52.726830 kubelet[2204]: W1002 19:06:52.726765 2204 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabb233f1_a9fe_4d11_a7c7_7a6ee5d4e8ef.slice/cri-containerd-10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16.scope WatchSource:0}: task 10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16 not found: not found Oct 2 19:06:53.085141 kubelet[2204]: E1002 19:06:53.085100 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:54.086071 kubelet[2204]: E1002 19:06:54.086009 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:55.086356 kubelet[2204]: E1002 19:06:55.086306 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:56.087408 kubelet[2204]: E1002 19:06:56.087344 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:57.088246 kubelet[2204]: E1002 19:06:57.088201 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:57.091112 kubelet[2204]: E1002 19:06:57.091061 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:06:58.089398 kubelet[2204]: E1002 19:06:58.089357 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:06:59.090330 kubelet[2204]: E1002 19:06:59.090273 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:00.091449 kubelet[2204]: E1002 
19:07:00.091386 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:01.092254 kubelet[2204]: E1002 19:07:01.092211 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:02.092891 kubelet[2204]: E1002 19:07:02.092815 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:07:02.093483 kubelet[2204]: E1002 19:07:02.092927 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:02.321257 env[1745]: time="2023-10-02T19:07:02.321161674Z" level=info msg="CreateContainer within sandbox \"9a76136f0c07740cc3645bb01bc77763ff0f5a2f12a20e5101ba0f2a220f2d7f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:07:02.342393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1074775327.mount: Deactivated successfully. Oct 2 19:07:02.353297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2342867562.mount: Deactivated successfully. Oct 2 19:07:02.358164 env[1745]: time="2023-10-02T19:07:02.358068682Z" level=info msg="CreateContainer within sandbox \"9a76136f0c07740cc3645bb01bc77763ff0f5a2f12a20e5101ba0f2a220f2d7f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f\"" Oct 2 19:07:02.359224 env[1745]: time="2023-10-02T19:07:02.359176423Z" level=info msg="StartContainer for \"9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f\"" Oct 2 19:07:02.409054 systemd[1]: Started cri-containerd-9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f.scope. Oct 2 19:07:02.448763 systemd[1]: cri-containerd-9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f.scope: Deactivated successfully. 
Oct 2 19:07:02.680826 env[1745]: time="2023-10-02T19:07:02.680149025Z" level=info msg="shim disconnected" id=9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f Oct 2 19:07:02.681092 env[1745]: time="2023-10-02T19:07:02.680949553Z" level=warning msg="cleaning up after shim disconnected" id=9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f namespace=k8s.io Oct 2 19:07:02.681092 env[1745]: time="2023-10-02T19:07:02.680982000Z" level=info msg="cleaning up dead shim" Oct 2 19:07:02.708050 env[1745]: time="2023-10-02T19:07:02.707922842Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:07:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3288 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:07:02Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:07:02.708524 env[1745]: time="2023-10-02T19:07:02.708423020Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:07:02.708922 env[1745]: time="2023-10-02T19:07:02.708850181Z" level=error msg="Failed to pipe stderr of container \"9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f\"" error="reading from a closed fifo" Oct 2 19:07:02.709108 env[1745]: time="2023-10-02T19:07:02.708955286Z" level=error msg="Failed to pipe stdout of container \"9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f\"" error="reading from a closed fifo" Oct 2 19:07:02.711232 env[1745]: time="2023-10-02T19:07:02.711142185Z" level=error msg="StartContainer for \"9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:07:02.711785 kubelet[2204]: E1002 19:07:02.711484 2204 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f" Oct 2 19:07:02.711785 kubelet[2204]: E1002 19:07:02.711686 2204 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:07:02.711785 kubelet[2204]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:07:02.711785 kubelet[2204]: rm /hostbin/cilium-mount Oct 2 19:07:02.712251 kubelet[2204]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7tcqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-zbbsj_kube-system(abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:07:02.712373 kubelet[2204]: E1002 19:07:02.711747 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zbbsj" podUID=abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef Oct 2 19:07:02.831931 kubelet[2204]: I1002 19:07:02.831766 2204 scope.go:115] "RemoveContainer" containerID="10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16" Oct 2 19:07:02.832460 kubelet[2204]: I1002 19:07:02.832420 2204 scope.go:115] "RemoveContainer" containerID="10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16" Oct 2 19:07:02.835237 env[1745]: time="2023-10-02T19:07:02.835189486Z" level=info msg="RemoveContainer for \"10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16\"" Oct 2 19:07:02.837194 env[1745]: time="2023-10-02T19:07:02.835201185Z" level=info msg="RemoveContainer for \"10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16\"" Oct 2 19:07:02.837706 env[1745]: time="2023-10-02T19:07:02.837645667Z" level=error msg="RemoveContainer for \"10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16\" failed" error="failed to set removing state for container \"10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16\": container is already in removing state" Oct 2 19:07:02.838249 kubelet[2204]: E1002 19:07:02.838213 2204 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16\": container is already in removing state" 
containerID="10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16" Oct 2 19:07:02.838801 kubelet[2204]: I1002 19:07:02.838279 2204 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16} err="rpc error: code = Unknown desc = failed to set removing state for container \"10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16\": container is already in removing state" Oct 2 19:07:02.842317 env[1745]: time="2023-10-02T19:07:02.842262313Z" level=info msg="RemoveContainer for \"10ff56d42b50537524352e87a6f8580d5dd81e0412de777b2b99394197812d16\" returns successfully" Oct 2 19:07:02.843315 kubelet[2204]: E1002 19:07:02.843278 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-zbbsj_kube-system(abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef)\"" pod="kube-system/cilium-zbbsj" podUID=abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef Oct 2 19:07:03.093862 kubelet[2204]: E1002 19:07:03.093809 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:03.336815 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f-rootfs.mount: Deactivated successfully. Oct 2 19:07:04.095229 kubelet[2204]: E1002 19:07:04.095166 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:05.096329 kubelet[2204]: E1002 19:07:05.096258 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:05.788028 kubelet[2204]: W1002 19:07:05.786873 2204 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabb233f1_a9fe_4d11_a7c7_7a6ee5d4e8ef.slice/cri-containerd-9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f.scope WatchSource:0}: task 9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f not found: not found Oct 2 19:07:06.097523 kubelet[2204]: E1002 19:07:06.097394 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:07.093756 kubelet[2204]: E1002 19:07:07.093693 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:07:07.099011 kubelet[2204]: E1002 19:07:07.098967 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:08.099664 kubelet[2204]: E1002 19:07:08.099578 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:09.100787 kubelet[2204]: E1002 19:07:09.100725 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:10.101057 kubelet[2204]: E1002 19:07:10.100992 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:11.101639 kubelet[2204]: E1002 19:07:11.101600 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:07:11.894800 kubelet[2204]: E1002 19:07:11.894736 2204 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:11.941680 env[1745]: time="2023-10-02T19:07:11.941604531Z" level=info msg="StopPodSandbox for \"8932e5aa35004d696c01983d45b5c9f947ae3d2955997574c83353843d1f5ad0\"" Oct 2 19:07:11.942531 env[1745]: time="2023-10-02T19:07:11.942435750Z" level=info msg="TearDown network for sandbox \"8932e5aa35004d696c01983d45b5c9f947ae3d2955997574c83353843d1f5ad0\" successfully" Oct 2 19:07:11.942679 env[1745]: time="2023-10-02T19:07:11.942647064Z" level=info msg="StopPodSandbox for \"8932e5aa35004d696c01983d45b5c9f947ae3d2955997574c83353843d1f5ad0\" returns successfully" Oct 2 19:07:11.943734 env[1745]: time="2023-10-02T19:07:11.943663175Z" level=info msg="RemovePodSandbox for \"8932e5aa35004d696c01983d45b5c9f947ae3d2955997574c83353843d1f5ad0\"" Oct 2 19:07:11.944059 env[1745]: time="2023-10-02T19:07:11.943981646Z" level=info msg="Forcibly stopping sandbox \"8932e5aa35004d696c01983d45b5c9f947ae3d2955997574c83353843d1f5ad0\"" Oct 2 19:07:11.944324 env[1745]: time="2023-10-02T19:07:11.944288251Z" level=info msg="TearDown network for sandbox \"8932e5aa35004d696c01983d45b5c9f947ae3d2955997574c83353843d1f5ad0\" successfully" Oct 2 19:07:11.949129 env[1745]: time="2023-10-02T19:07:11.949074184Z" level=info msg="RemovePodSandbox \"8932e5aa35004d696c01983d45b5c9f947ae3d2955997574c83353843d1f5ad0\" returns successfully" Oct 2 19:07:11.950026 env[1745]: time="2023-10-02T19:07:11.949979009Z" level=info msg="StopPodSandbox for \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\"" Oct 2 19:07:11.950344 env[1745]: time="2023-10-02T19:07:11.950279362Z" level=info msg="TearDown network for sandbox \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\" successfully" Oct 2 19:07:11.950473 env[1745]: time="2023-10-02T19:07:11.950440614Z" level=info msg="StopPodSandbox for \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\" returns successfully" Oct 2 19:07:11.951255 env[1745]: time="2023-10-02T19:07:11.951169379Z" level=info msg="RemovePodSandbox for \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\"" Oct 2 19:07:11.951552 env[1745]: time="2023-10-02T19:07:11.951472875Z" level=info msg="Forcibly stopping sandbox \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\"" Oct 2 19:07:11.952004 env[1745]: time="2023-10-02T19:07:11.951895588Z" level=info msg="TearDown network for sandbox \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\" successfully" Oct 2 19:07:11.956737 env[1745]: time="2023-10-02T19:07:11.956656407Z" level=info msg="RemovePodSandbox \"7344f400ad6d849512c6cbe5c4ce034f4a6c716762f54b9111dba042e04c57ef\" returns successfully" Oct 2 19:07:12.095192 kubelet[2204]: E1002 19:07:12.095153 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:07:12.103378 kubelet[2204]: E1002 19:07:12.103333 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:13.103700 kubelet[2204]: E1002 19:07:13.103638 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:14.104510 kubelet[2204]: E1002 19:07:14.104448 2204 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:15.104894 kubelet[2204]: E1002 19:07:15.104851 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:16.106362 kubelet[2204]: E1002 19:07:16.106316 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:17.096290 kubelet[2204]: E1002 19:07:17.096258 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:07:17.108123 kubelet[2204]: E1002 19:07:17.108067 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:17.315649 kubelet[2204]: E1002 19:07:17.315611 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-zbbsj_kube-system(abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef)\"" pod="kube-system/cilium-zbbsj" podUID=abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef Oct 2 19:07:18.108921 kubelet[2204]: E1002 19:07:18.108856 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:19.109916 kubelet[2204]: E1002 19:07:19.109850 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:20.110089 kubelet[2204]: E1002 19:07:20.110027 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:21.110895 kubelet[2204]: E1002 19:07:21.110842 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:22.098024 kubelet[2204]: E1002 19:07:22.097926 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:07:22.112627 kubelet[2204]: E1002 19:07:22.112598 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:23.113376 kubelet[2204]: E1002 19:07:23.113300 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:24.114020 kubelet[2204]: E1002 19:07:24.113979 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:25.114767 kubelet[2204]: E1002 19:07:25.114725 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:26.115633 kubelet[2204]: E1002 19:07:26.115564 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:27.099754 kubelet[2204]: E1002 19:07:27.099722 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:07:27.116352 kubelet[2204]: E1002 19:07:27.116293 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:07:28.117501 kubelet[2204]: E1002 19:07:28.117435 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:29.118256 kubelet[2204]: E1002 19:07:29.118213 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:29.319049 env[1745]: time="2023-10-02T19:07:29.318981298Z" level=info msg="CreateContainer within sandbox \"9a76136f0c07740cc3645bb01bc77763ff0f5a2f12a20e5101ba0f2a220f2d7f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:07:29.336338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount669811340.mount: Deactivated successfully. Oct 2 19:07:29.346085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount906860413.mount: Deactivated successfully. Oct 2 19:07:29.356090 env[1745]: time="2023-10-02T19:07:29.355998109Z" level=info msg="CreateContainer within sandbox \"9a76136f0c07740cc3645bb01bc77763ff0f5a2f12a20e5101ba0f2a220f2d7f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"8d701a5792fb7a501aa770c64b23f4b1c65edab9e02628e46d4dd85769286871\"" Oct 2 19:07:29.357038 env[1745]: time="2023-10-02T19:07:29.356972199Z" level=info msg="StartContainer for \"8d701a5792fb7a501aa770c64b23f4b1c65edab9e02628e46d4dd85769286871\"" Oct 2 19:07:29.407109 systemd[1]: Started cri-containerd-8d701a5792fb7a501aa770c64b23f4b1c65edab9e02628e46d4dd85769286871.scope. Oct 2 19:07:29.446051 systemd[1]: cri-containerd-8d701a5792fb7a501aa770c64b23f4b1c65edab9e02628e46d4dd85769286871.scope: Deactivated successfully. Oct 2 19:07:29.468013 env[1745]: time="2023-10-02T19:07:29.467932777Z" level=info msg="shim disconnected" id=8d701a5792fb7a501aa770c64b23f4b1c65edab9e02628e46d4dd85769286871 Oct 2 19:07:29.468287 env[1745]: time="2023-10-02T19:07:29.468014064Z" level=warning msg="cleaning up after shim disconnected" id=8d701a5792fb7a501aa770c64b23f4b1c65edab9e02628e46d4dd85769286871 namespace=k8s.io Oct 2 19:07:29.468287 env[1745]: time="2023-10-02T19:07:29.468036755Z" level=info msg="cleaning up dead shim" Oct 2 19:07:29.496143 env[1745]: time="2023-10-02T19:07:29.496071212Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:07:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3333 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:07:29Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8d701a5792fb7a501aa770c64b23f4b1c65edab9e02628e46d4dd85769286871/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:07:29.496618 env[1745]: time="2023-10-02T19:07:29.496516623Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:07:29.497070 env[1745]: time="2023-10-02T19:07:29.497007323Z" level=error msg="Failed to pipe stderr of container \"8d701a5792fb7a501aa770c64b23f4b1c65edab9e02628e46d4dd85769286871\"" error="reading from a closed fifo" Oct 2 19:07:29.501116 env[1745]: time="2023-10-02T19:07:29.501049290Z" level=error msg="Failed to pipe stdout of container \"8d701a5792fb7a501aa770c64b23f4b1c65edab9e02628e46d4dd85769286871\"" error="reading from a closed fifo" Oct 2 19:07:29.503574 env[1745]: time="2023-10-02T19:07:29.503505138Z" level=error msg="StartContainer for \"8d701a5792fb7a501aa770c64b23f4b1c65edab9e02628e46d4dd85769286871\" failed" error="failed to create containerd task: failed to create shim task: 
OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:07:29.504037 kubelet[2204]: E1002 19:07:29.504002 2204 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8d701a5792fb7a501aa770c64b23f4b1c65edab9e02628e46d4dd85769286871" Oct 2 19:07:29.504222 kubelet[2204]: E1002 19:07:29.504144 2204 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:07:29.504222 kubelet[2204]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:07:29.504222 kubelet[2204]: rm /hostbin/cilium-mount Oct 2 19:07:29.504222 kubelet[2204]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7tcqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-zbbsj_kube-system(abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:07:29.504523 kubelet[2204]: E1002 19:07:29.504209 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zbbsj" podUID=abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef Oct 2 19:07:29.896763 kubelet[2204]: I1002 19:07:29.896730 2204 scope.go:115] "RemoveContainer" containerID="9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f" Oct 2 19:07:29.897378 kubelet[2204]: I1002 19:07:29.897341 2204 
scope.go:115] "RemoveContainer" containerID="9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f" Oct 2 19:07:29.899602 env[1745]: time="2023-10-02T19:07:29.899539168Z" level=info msg="RemoveContainer for \"9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f\"" Oct 2 19:07:29.900632 env[1745]: time="2023-10-02T19:07:29.900573773Z" level=info msg="RemoveContainer for \"9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f\"" Oct 2 19:07:29.901070 env[1745]: time="2023-10-02T19:07:29.900980893Z" level=error msg="RemoveContainer for \"9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f\" failed" error="failed to set removing state for container \"9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f\": container is already in removing state" Oct 2 19:07:29.902619 kubelet[2204]: E1002 19:07:29.902571 2204 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f\": container is already in removing state" containerID="9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f" Oct 2 19:07:29.902806 kubelet[2204]: E1002 19:07:29.902663 2204 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f": container is already in removing state; Skipping pod "cilium-zbbsj_kube-system(abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef)" Oct 2 19:07:29.903149 kubelet[2204]: E1002 19:07:29.903119 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-zbbsj_kube-system(abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef)\"" pod="kube-system/cilium-zbbsj" podUID=abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef Oct 2 19:07:29.906982 env[1745]: time="2023-10-02T19:07:29.906893810Z" level=info msg="RemoveContainer for \"9153432ec78f982a89e76b41f98520fd0f12444f7f0df01ecbaabe528576600f\" returns successfully" Oct 2 19:07:30.119894 kubelet[2204]: E1002 19:07:30.119822 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:30.331791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d701a5792fb7a501aa770c64b23f4b1c65edab9e02628e46d4dd85769286871-rootfs.mount: Deactivated successfully. 
Oct 2 19:07:31.120469 kubelet[2204]: E1002 19:07:31.120372 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:31.893974 kubelet[2204]: E1002 19:07:31.893886 2204 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:32.101658 kubelet[2204]: E1002 19:07:32.101595 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:07:32.120972 kubelet[2204]: E1002 19:07:32.120921 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:32.574676 kubelet[2204]: W1002 19:07:32.574619 2204 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabb233f1_a9fe_4d11_a7c7_7a6ee5d4e8ef.slice/cri-containerd-8d701a5792fb7a501aa770c64b23f4b1c65edab9e02628e46d4dd85769286871.scope WatchSource:0}: task 8d701a5792fb7a501aa770c64b23f4b1c65edab9e02628e46d4dd85769286871 not found: not found Oct 2 19:07:33.121815 kubelet[2204]: E1002 19:07:33.121751 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:34.122434 kubelet[2204]: E1002 19:07:34.122392 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:35.123407 kubelet[2204]: E1002 19:07:35.123351 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:36.123858 kubelet[2204]: E1002 19:07:36.123812 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:37.102918 kubelet[2204]: E1002 19:07:37.102857 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:07:37.125352 kubelet[2204]: E1002 19:07:37.125301 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:38.126129 kubelet[2204]: E1002 19:07:38.126070 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:39.127119 kubelet[2204]: E1002 19:07:39.127053 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:40.127921 kubelet[2204]: E1002 19:07:40.127847 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:40.315780 kubelet[2204]: E1002 19:07:40.315725 2204 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-zbbsj_kube-system(abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef)\"" pod="kube-system/cilium-zbbsj" podUID=abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef Oct 2 19:07:41.128999 kubelet[2204]: E1002 19:07:41.128939 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:42.104206 kubelet[2204]: E1002 19:07:42.104151 2204 kubelet.go:2373] 
"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:07:42.130111 kubelet[2204]: E1002 19:07:42.130051 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:43.130587 kubelet[2204]: E1002 19:07:43.130540 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:44.131488 kubelet[2204]: E1002 19:07:44.131426 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:45.132031 kubelet[2204]: E1002 19:07:45.131966 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:46.132369 kubelet[2204]: E1002 19:07:46.132325 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:47.105653 kubelet[2204]: E1002 19:07:47.105590 2204 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:07:47.133268 kubelet[2204]: E1002 19:07:47.133217 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:47.475529 env[1745]: time="2023-10-02T19:07:47.475378450Z" level=info msg="StopPodSandbox for \"9a76136f0c07740cc3645bb01bc77763ff0f5a2f12a20e5101ba0f2a220f2d7f\"" Oct 2 19:07:47.475529 env[1745]: time="2023-10-02T19:07:47.475492858Z" level=info msg="Container to stop \"8d701a5792fb7a501aa770c64b23f4b1c65edab9e02628e46d4dd85769286871\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:07:47.479360 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a76136f0c07740cc3645bb01bc77763ff0f5a2f12a20e5101ba0f2a220f2d7f-shm.mount: Deactivated successfully. Oct 2 19:07:47.497318 systemd[1]: cri-containerd-9a76136f0c07740cc3645bb01bc77763ff0f5a2f12a20e5101ba0f2a220f2d7f.scope: Deactivated successfully. Oct 2 19:07:47.502190 kernel: kauditd_printk_skb: 166 callbacks suppressed Oct 2 19:07:47.502373 kernel: audit: type=1334 audit(1696273667.497:809): prog-id=91 op=UNLOAD Oct 2 19:07:47.497000 audit: BPF prog-id=91 op=UNLOAD Oct 2 19:07:47.504000 audit: BPF prog-id=94 op=UNLOAD Oct 2 19:07:47.509132 kernel: audit: type=1334 audit(1696273667.504:810): prog-id=94 op=UNLOAD Oct 2 19:07:47.541826 env[1745]: time="2023-10-02T19:07:47.541756145Z" level=info msg="StopContainer for \"43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad\" with timeout 30 (s)" Oct 2 19:07:47.542526 env[1745]: time="2023-10-02T19:07:47.542445931Z" level=info msg="Stop container \"43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad\" with signal terminated" Oct 2 19:07:47.557932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a76136f0c07740cc3645bb01bc77763ff0f5a2f12a20e5101ba0f2a220f2d7f-rootfs.mount: Deactivated successfully. 
Oct 2 19:07:47.574391 env[1745]: time="2023-10-02T19:07:47.573334956Z" level=info msg="shim disconnected" id=9a76136f0c07740cc3645bb01bc77763ff0f5a2f12a20e5101ba0f2a220f2d7f Oct 2 19:07:47.574695 env[1745]: time="2023-10-02T19:07:47.574390802Z" level=warning msg="cleaning up after shim disconnected" id=9a76136f0c07740cc3645bb01bc77763ff0f5a2f12a20e5101ba0f2a220f2d7f namespace=k8s.io Oct 2 19:07:47.574695 env[1745]: time="2023-10-02T19:07:47.574422422Z" level=info msg="cleaning up dead shim" Oct 2 19:07:47.596000 audit: BPF prog-id=99 op=UNLOAD Oct 2 19:07:47.596611 systemd[1]: cri-containerd-43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad.scope: Deactivated successfully. Oct 2 19:07:47.601396 kernel: audit: type=1334 audit(1696273667.596:811): prog-id=99 op=UNLOAD Oct 2 19:07:47.601553 kernel: audit: type=1334 audit(1696273667.600:812): prog-id=102 op=UNLOAD Oct 2 19:07:47.600000 audit: BPF prog-id=102 op=UNLOAD Oct 2 19:07:47.619165 env[1745]: time="2023-10-02T19:07:47.619093099Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:07:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3375 runtime=io.containerd.runc.v2\n" Oct 2 19:07:47.619709 env[1745]: time="2023-10-02T19:07:47.619653248Z" level=info msg="TearDown network for sandbox \"9a76136f0c07740cc3645bb01bc77763ff0f5a2f12a20e5101ba0f2a220f2d7f\" successfully" Oct 2 19:07:47.619835 env[1745]: time="2023-10-02T19:07:47.619701561Z" level=info msg="StopPodSandbox for \"9a76136f0c07740cc3645bb01bc77763ff0f5a2f12a20e5101ba0f2a220f2d7f\" returns successfully" Oct 2 19:07:47.651271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad-rootfs.mount: Deactivated successfully. Oct 2 19:07:47.666226 env[1745]: time="2023-10-02T19:07:47.666151805Z" level=info msg="shim disconnected" id=43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad Oct 2 19:07:47.666226 env[1745]: time="2023-10-02T19:07:47.666221645Z" level=warning msg="cleaning up after shim disconnected" id=43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad namespace=k8s.io Oct 2 19:07:47.666570 env[1745]: time="2023-10-02T19:07:47.666244061Z" level=info msg="cleaning up dead shim" Oct 2 19:07:47.678143 kubelet[2204]: I1002 19:07:47.676885 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-clustermesh-secrets\") pod \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " Oct 2 19:07:47.678143 kubelet[2204]: I1002 19:07:47.676976 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-lib-modules\") pod \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " Oct 2 19:07:47.678143 kubelet[2204]: I1002 19:07:47.677018 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-host-proc-sys-net\") pod \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " Oct 2 19:07:47.678143 kubelet[2204]: I1002 19:07:47.677063 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-hubble-tls\") pod 
\"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " Oct 2 19:07:47.678143 kubelet[2204]: I1002 19:07:47.677101 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-bpf-maps\") pod \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " Oct 2 19:07:47.678143 kubelet[2204]: I1002 19:07:47.677138 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-cilium-run\") pod \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " Oct 2 19:07:47.678700 kubelet[2204]: I1002 19:07:47.677205 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tcqv\" (UniqueName: \"kubernetes.io/projected/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-kube-api-access-7tcqv\") pod \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " Oct 2 19:07:47.678700 kubelet[2204]: I1002 19:07:47.677249 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-cni-path\") pod \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " Oct 2 19:07:47.678700 kubelet[2204]: I1002 19:07:47.677309 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-cilium-ipsec-secrets\") pod \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " Oct 2 19:07:47.678700 kubelet[2204]: I1002 19:07:47.677350 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-etc-cni-netd\") pod \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " Oct 2 19:07:47.678700 kubelet[2204]: I1002 19:07:47.677389 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-hostproc\") pod \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " Oct 2 19:07:47.678700 kubelet[2204]: I1002 19:07:47.677432 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-xtables-lock\") pod \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " Oct 2 19:07:47.679124 kubelet[2204]: I1002 19:07:47.677477 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-cilium-config-path\") pod \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " Oct 2 19:07:47.679124 kubelet[2204]: I1002 19:07:47.677521 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-host-proc-sys-kernel\") pod \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " Oct 2 19:07:47.679124 kubelet[2204]: I1002 
19:07:47.677559 2204 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-cilium-cgroup\") pod \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\" (UID: \"abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef\") " Oct 2 19:07:47.679124 kubelet[2204]: I1002 19:07:47.677637 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef" (UID: "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:07:47.679124 kubelet[2204]: I1002 19:07:47.677981 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-cni-path" (OuterVolumeSpecName: "cni-path") pod "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef" (UID: "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:07:47.679445 kubelet[2204]: I1002 19:07:47.678030 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef" (UID: "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:07:47.679445 kubelet[2204]: I1002 19:07:47.678069 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef" (UID: "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:07:47.679445 kubelet[2204]: I1002 19:07:47.678402 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef" (UID: "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:07:47.679445 kubelet[2204]: I1002 19:07:47.678447 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef" (UID: "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:07:47.679445 kubelet[2204]: I1002 19:07:47.678780 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-hostproc" (OuterVolumeSpecName: "hostproc") pod "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef" (UID: "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:07:47.679739 kubelet[2204]: I1002 19:07:47.679134 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef" (UID: "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:07:47.679739 kubelet[2204]: W1002 19:07:47.679348 2204 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:07:47.681459 kubelet[2204]: I1002 19:07:47.680145 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef" (UID: "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:07:47.681459 kubelet[2204]: I1002 19:07:47.680234 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef" (UID: "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:07:47.699191 systemd[1]: var-lib-kubelet-pods-abb233f1\x2da9fe\x2d4d11\x2da7c7\x2d7a6ee5d4e8ef-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:07:47.701709 kubelet[2204]: I1002 19:07:47.701657 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef" (UID: "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:07:47.706501 systemd[1]: var-lib-kubelet-pods-abb233f1\x2da9fe\x2d4d11\x2da7c7\x2d7a6ee5d4e8ef-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 2 19:07:47.709811 kubelet[2204]: I1002 19:07:47.709740 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef" (UID: "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:07:47.715044 kubelet[2204]: I1002 19:07:47.711101 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef" (UID: "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:07:47.716803 kubelet[2204]: I1002 19:07:47.716755 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef" (UID: "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:07:47.717093 kubelet[2204]: I1002 19:07:47.716774 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-kube-api-access-7tcqv" (OuterVolumeSpecName: "kube-api-access-7tcqv") pod "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef" (UID: "abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef"). InnerVolumeSpecName "kube-api-access-7tcqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:07:47.721604 env[1745]: time="2023-10-02T19:07:47.721538141Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:07:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3401 runtime=io.containerd.runc.v2\n" Oct 2 19:07:47.729501 env[1745]: time="2023-10-02T19:07:47.726793910Z" level=info msg="StopContainer for \"43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad\" returns successfully" Oct 2 19:07:47.729501 env[1745]: time="2023-10-02T19:07:47.727516924Z" level=info msg="StopPodSandbox for \"40577df99b578e911bb54c11ecfdf66177b31183305ba7ae05772e6360a870f5\"" Oct 2 19:07:47.729501 env[1745]: time="2023-10-02T19:07:47.727594516Z" level=info msg="Container to stop \"43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:07:47.746000 audit: BPF prog-id=95 op=UNLOAD Oct 2 19:07:47.747216 systemd[1]: cri-containerd-40577df99b578e911bb54c11ecfdf66177b31183305ba7ae05772e6360a870f5.scope: Deactivated successfully. 
Oct 2 19:07:47.750953 kernel: audit: type=1334 audit(1696273667.746:813): prog-id=95 op=UNLOAD Oct 2 19:07:47.752000 audit: BPF prog-id=98 op=UNLOAD Oct 2 19:07:47.755943 kernel: audit: type=1334 audit(1696273667.752:814): prog-id=98 op=UNLOAD Oct 2 19:07:47.779055 kubelet[2204]: I1002 19:07:47.778419 2204 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-hostproc\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:07:47.779055 kubelet[2204]: I1002 19:07:47.778536 2204 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-xtables-lock\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:07:47.779055 kubelet[2204]: I1002 19:07:47.778570 2204 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-cilium-config-path\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:07:47.779055 kubelet[2204]: I1002 19:07:47.778595 2204 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-etc-cni-netd\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:07:47.779055 kubelet[2204]: I1002 19:07:47.778648 2204 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-host-proc-sys-kernel\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:07:47.779055 kubelet[2204]: I1002 19:07:47.778672 2204 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-cilium-cgroup\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:07:47.779055 kubelet[2204]: I1002 19:07:47.778723 2204 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-host-proc-sys-net\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:07:47.779055 kubelet[2204]: I1002 19:07:47.778748 2204 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-hubble-tls\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:07:47.779656 kubelet[2204]: I1002 19:07:47.778771 2204 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-clustermesh-secrets\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:07:47.779656 kubelet[2204]: I1002 19:07:47.778821 2204 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-lib-modules\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:07:47.779656 kubelet[2204]: I1002 19:07:47.778845 2204 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-bpf-maps\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:07:47.779656 kubelet[2204]: I1002 19:07:47.778891 2204 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-cilium-run\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:07:47.779656 kubelet[2204]: I1002 19:07:47.778948 2204 reconciler.go:399] "Volume detached for volume 
\"kube-api-access-7tcqv\" (UniqueName: \"kubernetes.io/projected/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-kube-api-access-7tcqv\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:07:47.779656 kubelet[2204]: I1002 19:07:47.778973 2204 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-cni-path\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:07:47.779656 kubelet[2204]: I1002 19:07:47.779019 2204 reconciler.go:399] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef-cilium-ipsec-secrets\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:07:47.820364 env[1745]: time="2023-10-02T19:07:47.820285605Z" level=info msg="shim disconnected" id=40577df99b578e911bb54c11ecfdf66177b31183305ba7ae05772e6360a870f5 Oct 2 19:07:47.820364 env[1745]: time="2023-10-02T19:07:47.820358589Z" level=warning msg="cleaning up after shim disconnected" id=40577df99b578e911bb54c11ecfdf66177b31183305ba7ae05772e6360a870f5 namespace=k8s.io Oct 2 19:07:47.820705 env[1745]: time="2023-10-02T19:07:47.820383369Z" level=info msg="cleaning up dead shim" Oct 2 19:07:47.845555 env[1745]: time="2023-10-02T19:07:47.845490988Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:07:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3440 runtime=io.containerd.runc.v2\n" Oct 2 19:07:47.846165 env[1745]: time="2023-10-02T19:07:47.846112529Z" level=info msg="TearDown network for sandbox \"40577df99b578e911bb54c11ecfdf66177b31183305ba7ae05772e6360a870f5\" successfully" Oct 2 19:07:47.846319 env[1745]: time="2023-10-02T19:07:47.846164597Z" level=info msg="StopPodSandbox for \"40577df99b578e911bb54c11ecfdf66177b31183305ba7ae05772e6360a870f5\" returns successfully" Oct 2 19:07:47.937934 kubelet[2204]: I1002 19:07:47.937700 2204 scope.go:115] "RemoveContainer" containerID="43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad" Oct 2 19:07:47.947419 env[1745]: time="2023-10-02T19:07:47.947229577Z" level=info msg="RemoveContainer for \"43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad\"" Oct 2 19:07:47.950324 systemd[1]: Removed slice kubepods-burstable-podabb233f1_a9fe_4d11_a7c7_7a6ee5d4e8ef.slice. 
Oct 2 19:07:47.956089 env[1745]: time="2023-10-02T19:07:47.956000512Z" level=info msg="RemoveContainer for \"43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad\" returns successfully"
Oct 2 19:07:47.958533 kubelet[2204]: I1002 19:07:47.958464 2204 scope.go:115] "RemoveContainer" containerID="43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad"
Oct 2 19:07:47.959328 env[1745]: time="2023-10-02T19:07:47.959117841Z" level=error msg="ContainerStatus for \"43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad\": not found"
Oct 2 19:07:47.959472 kubelet[2204]: E1002 19:07:47.959454 2204 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad\": not found" containerID="43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad"
Oct 2 19:07:47.959582 kubelet[2204]: I1002 19:07:47.959511 2204 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad} err="failed to get container status \"43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad\": not found"
Oct 2 19:07:47.959582 kubelet[2204]: I1002 19:07:47.959537 2204 scope.go:115] "RemoveContainer" containerID="8d701a5792fb7a501aa770c64b23f4b1c65edab9e02628e46d4dd85769286871"
Oct 2 19:07:47.967705 env[1745]: time="2023-10-02T19:07:47.967651788Z" level=info msg="RemoveContainer for \"8d701a5792fb7a501aa770c64b23f4b1c65edab9e02628e46d4dd85769286871\""
Oct 2 19:07:47.972843 env[1745]: time="2023-10-02T19:07:47.972767997Z" level=info msg="RemoveContainer for \"8d701a5792fb7a501aa770c64b23f4b1c65edab9e02628e46d4dd85769286871\" returns successfully"
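Editor's note: the "ContainerStatus from runtime service failed ... NotFound" errors above are benign: the container had just been removed, so a NotFound answer is the expected outcome rather than a failure to retry. The self-contained Go sketch below shows the kind of error check a CRI caller can use to treat that case as "already gone". The error text is copied from the log; the helper and its use here are purely illustrative.

    // notfound.go - interpreting a CRI NotFound error as "already removed".
    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // alreadyGone reports whether a CRI error just means the container no
    // longer exists on the runtime side.
    func alreadyGone(err error) bool {
    	return status.Code(err) == codes.NotFound
    }

    func main() {
    	// Stand-in for the response a caller would get back from ContainerStatus
    	// for the already-removed container in the log.
    	err := status.Error(codes.NotFound,
    		`an error occurred when try to find container "43389be567e7771cd31ea14a388ca4fd7627c804e7117354c0e0313a1a3154ad": not found`)

    	if alreadyGone(err) {
    		fmt.Println("container already removed; nothing left to clean up")
    		return
    	}
    	if err != nil {
    		fmt.Println("unexpected runtime error:", err)
    	}
    }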
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:07:47.993357 kubelet[2204]: I1002 19:07:47.993303 2204 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/766735ec-3310-4447-b638-6ab031198568-kube-api-access-tjlqx" (OuterVolumeSpecName: "kube-api-access-tjlqx") pod "766735ec-3310-4447-b638-6ab031198568" (UID: "766735ec-3310-4447-b638-6ab031198568"). InnerVolumeSpecName "kube-api-access-tjlqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:07:48.079878 kubelet[2204]: I1002 19:07:48.079832 2204 reconciler.go:399] "Volume detached for volume \"kube-api-access-tjlqx\" (UniqueName: \"kubernetes.io/projected/766735ec-3310-4447-b638-6ab031198568-kube-api-access-tjlqx\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:07:48.080140 kubelet[2204]: I1002 19:07:48.080117 2204 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/766735ec-3310-4447-b638-6ab031198568-cilium-config-path\") on node \"172.31.18.218\" DevicePath \"\"" Oct 2 19:07:48.134082 kubelet[2204]: E1002 19:07:48.134023 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:48.245454 systemd[1]: Removed slice kubepods-besteffort-pod766735ec_3310_4447_b638_6ab031198568.slice. Oct 2 19:07:48.320801 kubelet[2204]: I1002 19:07:48.320762 2204 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=766735ec-3310-4447-b638-6ab031198568 path="/var/lib/kubelet/pods/766735ec-3310-4447-b638-6ab031198568/volumes" Oct 2 19:07:48.322206 kubelet[2204]: I1002 19:07:48.322164 2204 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef path="/var/lib/kubelet/pods/abb233f1-a9fe-4d11-a7c7-7a6ee5d4e8ef/volumes" Oct 2 19:07:48.478971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40577df99b578e911bb54c11ecfdf66177b31183305ba7ae05772e6360a870f5-rootfs.mount: Deactivated successfully. Oct 2 19:07:48.479151 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-40577df99b578e911bb54c11ecfdf66177b31183305ba7ae05772e6360a870f5-shm.mount: Deactivated successfully. Oct 2 19:07:48.479285 systemd[1]: var-lib-kubelet-pods-abb233f1\x2da9fe\x2d4d11\x2da7c7\x2d7a6ee5d4e8ef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7tcqv.mount: Deactivated successfully. Oct 2 19:07:48.479415 systemd[1]: var-lib-kubelet-pods-766735ec\x2d3310\x2d4447\x2db638\x2d6ab031198568-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtjlqx.mount: Deactivated successfully. Oct 2 19:07:48.479559 systemd[1]: var-lib-kubelet-pods-abb233f1\x2da9fe\x2d4d11\x2da7c7\x2d7a6ee5d4e8ef-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:07:49.134963 kubelet[2204]: E1002 19:07:49.134880 2204 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"