Oct 2 18:50:50.182783 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Oct 2 18:50:50.182821 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Oct 2 17:55:37 -00 2023 Oct 2 18:50:50.182844 kernel: efi: EFI v2.70 by EDK II Oct 2 18:50:50.182859 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71accf98 Oct 2 18:50:50.182872 kernel: ACPI: Early table checksum verification disabled Oct 2 18:50:50.182886 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Oct 2 18:50:50.182902 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Oct 2 18:50:50.182916 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Oct 2 18:50:50.182930 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Oct 2 18:50:50.182943 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Oct 2 18:50:50.182962 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Oct 2 18:50:50.182975 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Oct 2 18:50:50.182989 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Oct 2 18:50:50.183003 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Oct 2 18:50:50.183019 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Oct 2 18:50:50.183038 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Oct 2 18:50:50.183052 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Oct 2 18:50:50.183066 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Oct 2 18:50:50.183081 kernel: printk: bootconsole [uart0] enabled Oct 2 18:50:50.183095 kernel: NUMA: Failed to initialise from firmware Oct 2 18:50:50.183110 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Oct 2 18:50:50.183124 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff] Oct 2 18:50:50.183139 kernel: Zone ranges: Oct 2 18:50:50.183158 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Oct 2 18:50:50.183173 kernel: DMA32 empty Oct 2 18:50:50.183208 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Oct 2 18:50:50.183234 kernel: Movable zone start for each node Oct 2 18:50:50.183250 kernel: Early memory node ranges Oct 2 18:50:50.183265 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff] Oct 2 18:50:50.183279 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Oct 2 18:50:50.183293 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Oct 2 18:50:50.183308 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Oct 2 18:50:50.183322 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Oct 2 18:50:50.183336 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Oct 2 18:50:50.183350 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Oct 2 18:50:50.183365 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Oct 2 18:50:50.183379 kernel: psci: probing for conduit method from ACPI. Oct 2 18:50:50.183393 kernel: psci: PSCIv1.0 detected in firmware. 
Oct 2 18:50:50.183412 kernel: psci: Using standard PSCI v0.2 function IDs Oct 2 18:50:50.183426 kernel: psci: Trusted OS migration not required Oct 2 18:50:50.183447 kernel: psci: SMC Calling Convention v1.1 Oct 2 18:50:50.183463 kernel: ACPI: SRAT not present Oct 2 18:50:50.183478 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Oct 2 18:50:50.183497 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Oct 2 18:50:50.183513 kernel: pcpu-alloc: [0] 0 [0] 1 Oct 2 18:50:50.183528 kernel: Detected PIPT I-cache on CPU0 Oct 2 18:50:50.183543 kernel: CPU features: detected: GIC system register CPU interface Oct 2 18:50:50.183558 kernel: CPU features: detected: Spectre-v2 Oct 2 18:50:50.183573 kernel: CPU features: detected: Spectre-v3a Oct 2 18:50:50.183588 kernel: CPU features: detected: Spectre-BHB Oct 2 18:50:50.183603 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 2 18:50:50.183618 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 2 18:50:50.183633 kernel: CPU features: detected: ARM erratum 1742098 Oct 2 18:50:50.183648 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Oct 2 18:50:50.183667 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Oct 2 18:50:50.183682 kernel: Policy zone: Normal Oct 2 18:50:50.183700 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 18:50:50.183716 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 18:50:50.183732 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 18:50:50.183747 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 18:50:50.183762 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 18:50:50.183778 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Oct 2 18:50:50.183793 kernel: Memory: 3826444K/4030464K available (9792K kernel code, 2092K rwdata, 7548K rodata, 34560K init, 779K bss, 204020K reserved, 0K cma-reserved) Oct 2 18:50:50.183809 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 2 18:50:50.183828 kernel: trace event string verifier disabled Oct 2 18:50:50.183844 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 2 18:50:50.183860 kernel: rcu: RCU event tracing is enabled. Oct 2 18:50:50.183875 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 2 18:50:50.183890 kernel: Trampoline variant of Tasks RCU enabled. Oct 2 18:50:50.183905 kernel: Tracing variant of Tasks RCU enabled. Oct 2 18:50:50.183921 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 2 18:50:50.183936 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 2 18:50:50.183951 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 2 18:50:50.183966 kernel: GICv3: 96 SPIs implemented Oct 2 18:50:50.183980 kernel: GICv3: 0 Extended SPIs implemented Oct 2 18:50:50.183995 kernel: GICv3: Distributor has no Range Selector support Oct 2 18:50:50.184015 kernel: Root IRQ handler: gic_handle_irq Oct 2 18:50:50.184030 kernel: GICv3: 16 PPIs implemented Oct 2 18:50:50.184063 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Oct 2 18:50:50.184080 kernel: ACPI: SRAT not present Oct 2 18:50:50.184095 kernel: ITS [mem 0x10080000-0x1009ffff] Oct 2 18:50:50.184110 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1) Oct 2 18:50:50.184126 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1) Oct 2 18:50:50.184141 kernel: GICv3: using LPI property table @0x00000004000c0000 Oct 2 18:50:50.184156 kernel: ITS: Using hypervisor restricted LPI range [128] Oct 2 18:50:50.184171 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000 Oct 2 18:50:50.184186 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Oct 2 18:50:50.184255 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Oct 2 18:50:50.184272 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Oct 2 18:50:50.184287 kernel: Console: colour dummy device 80x25 Oct 2 18:50:50.184302 kernel: printk: console [tty1] enabled Oct 2 18:50:50.184317 kernel: ACPI: Core revision 20210730 Oct 2 18:50:50.184333 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Oct 2 18:50:50.184349 kernel: pid_max: default: 32768 minimum: 301 Oct 2 18:50:50.184364 kernel: LSM: Security Framework initializing Oct 2 18:50:50.184380 kernel: SELinux: Initializing. Oct 2 18:50:50.184395 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 18:50:50.184415 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 18:50:50.184430 kernel: rcu: Hierarchical SRCU implementation. Oct 2 18:50:50.184446 kernel: Platform MSI: ITS@0x10080000 domain created Oct 2 18:50:50.184461 kernel: PCI/MSI: ITS@0x10080000 domain created Oct 2 18:50:50.184476 kernel: Remapping and enabling EFI services. Oct 2 18:50:50.184492 kernel: smp: Bringing up secondary CPUs ... Oct 2 18:50:50.184507 kernel: Detected PIPT I-cache on CPU1 Oct 2 18:50:50.184523 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Oct 2 18:50:50.184539 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000 Oct 2 18:50:50.184558 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Oct 2 18:50:50.184574 kernel: smp: Brought up 1 node, 2 CPUs Oct 2 18:50:50.184589 kernel: SMP: Total of 2 processors activated. 
Oct 2 18:50:50.184605 kernel: CPU features: detected: 32-bit EL0 Support Oct 2 18:50:50.184620 kernel: CPU features: detected: 32-bit EL1 Support Oct 2 18:50:50.184635 kernel: CPU features: detected: CRC32 instructions Oct 2 18:50:50.184650 kernel: CPU: All CPU(s) started at EL1 Oct 2 18:50:50.184666 kernel: alternatives: patching kernel code Oct 2 18:50:50.184681 kernel: devtmpfs: initialized Oct 2 18:50:50.184700 kernel: KASLR disabled due to lack of seed Oct 2 18:50:50.184716 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 18:50:50.184732 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 2 18:50:50.184758 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 18:50:50.184777 kernel: SMBIOS 3.0.0 present. Oct 2 18:50:50.184793 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Oct 2 18:50:50.184809 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 18:50:50.184825 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 2 18:50:50.184842 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 2 18:50:50.184858 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 2 18:50:50.184873 kernel: audit: initializing netlink subsys (disabled) Oct 2 18:50:50.184890 kernel: audit: type=2000 audit(0.250:1): state=initialized audit_enabled=0 res=1 Oct 2 18:50:50.184910 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 18:50:50.184926 kernel: cpuidle: using governor menu Oct 2 18:50:50.184942 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 2 18:50:50.184958 kernel: ASID allocator initialised with 32768 entries Oct 2 18:50:50.184974 kernel: ACPI: bus type PCI registered Oct 2 18:50:50.184994 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 18:50:50.185010 kernel: Serial: AMBA PL011 UART driver Oct 2 18:50:50.185025 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 18:50:50.185041 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Oct 2 18:50:50.185058 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 18:50:50.185073 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Oct 2 18:50:50.185089 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 18:50:50.185105 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 2 18:50:50.185121 kernel: ACPI: Added _OSI(Module Device) Oct 2 18:50:50.185141 kernel: ACPI: Added _OSI(Processor Device) Oct 2 18:50:50.185157 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 18:50:50.185173 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 18:50:50.185216 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 18:50:50.185237 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 18:50:50.185254 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 18:50:50.185270 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 18:50:50.185287 kernel: ACPI: Interpreter enabled Oct 2 18:50:50.185303 kernel: ACPI: Using GIC for interrupt routing Oct 2 18:50:50.185324 kernel: ACPI: MCFG table detected, 1 entries Oct 2 18:50:50.185341 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Oct 2 18:50:50.185753 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 18:50:50.185952 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 2 18:50:50.186143 kernel: 
acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 2 18:50:50.186366 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Oct 2 18:50:50.186562 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Oct 2 18:50:50.186591 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Oct 2 18:50:50.186608 kernel: acpiphp: Slot [1] registered Oct 2 18:50:50.186624 kernel: acpiphp: Slot [2] registered Oct 2 18:50:50.186640 kernel: acpiphp: Slot [3] registered Oct 2 18:50:50.186656 kernel: acpiphp: Slot [4] registered Oct 2 18:50:50.186672 kernel: acpiphp: Slot [5] registered Oct 2 18:50:50.186688 kernel: acpiphp: Slot [6] registered Oct 2 18:50:50.186703 kernel: acpiphp: Slot [7] registered Oct 2 18:50:50.186719 kernel: acpiphp: Slot [8] registered Oct 2 18:50:50.186739 kernel: acpiphp: Slot [9] registered Oct 2 18:50:50.186755 kernel: acpiphp: Slot [10] registered Oct 2 18:50:50.186771 kernel: acpiphp: Slot [11] registered Oct 2 18:50:50.186787 kernel: acpiphp: Slot [12] registered Oct 2 18:50:50.186803 kernel: acpiphp: Slot [13] registered Oct 2 18:50:50.186819 kernel: acpiphp: Slot [14] registered Oct 2 18:50:50.186835 kernel: acpiphp: Slot [15] registered Oct 2 18:50:50.186851 kernel: acpiphp: Slot [16] registered Oct 2 18:50:50.186867 kernel: acpiphp: Slot [17] registered Oct 2 18:50:50.186884 kernel: acpiphp: Slot [18] registered Oct 2 18:50:50.186904 kernel: acpiphp: Slot [19] registered Oct 2 18:50:50.186920 kernel: acpiphp: Slot [20] registered Oct 2 18:50:50.186936 kernel: acpiphp: Slot [21] registered Oct 2 18:50:50.186952 kernel: acpiphp: Slot [22] registered Oct 2 18:50:50.186968 kernel: acpiphp: Slot [23] registered Oct 2 18:50:50.186984 kernel: acpiphp: Slot [24] registered Oct 2 18:50:50.186999 kernel: acpiphp: Slot [25] registered Oct 2 18:50:50.187015 kernel: acpiphp: Slot [26] registered Oct 2 18:50:50.187031 kernel: acpiphp: Slot [27] registered Oct 2 18:50:50.187051 kernel: acpiphp: Slot [28] registered Oct 2 18:50:50.187067 kernel: acpiphp: Slot [29] registered Oct 2 18:50:50.187083 kernel: acpiphp: Slot [30] registered Oct 2 18:50:50.187099 kernel: acpiphp: Slot [31] registered Oct 2 18:50:50.187115 kernel: PCI host bridge to bus 0000:00 Oct 2 18:50:50.190356 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Oct 2 18:50:50.190568 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 2 18:50:50.190743 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Oct 2 18:50:50.190924 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Oct 2 18:50:50.191145 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Oct 2 18:50:50.191412 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Oct 2 18:50:50.191618 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Oct 2 18:50:50.191828 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Oct 2 18:50:50.192025 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Oct 2 18:50:50.192282 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 2 18:50:50.192494 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Oct 2 18:50:50.192690 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Oct 2 18:50:50.192891 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Oct 2 18:50:50.193098 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Oct 2 18:50:50.193338 kernel: 
pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 2 18:50:50.193546 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Oct 2 18:50:50.193756 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Oct 2 18:50:50.193959 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Oct 2 18:50:50.194157 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Oct 2 18:50:50.194404 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Oct 2 18:50:50.194693 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Oct 2 18:50:50.194879 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 2 18:50:50.195065 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Oct 2 18:50:50.195096 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 2 18:50:50.195114 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 2 18:50:50.195131 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 2 18:50:50.195148 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 2 18:50:50.195164 kernel: iommu: Default domain type: Translated Oct 2 18:50:50.195180 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 2 18:50:50.195220 kernel: vgaarb: loaded Oct 2 18:50:50.195239 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 18:50:50.195256 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 18:50:50.195278 kernel: PTP clock support registered Oct 2 18:50:50.195295 kernel: Registered efivars operations Oct 2 18:50:50.195312 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 2 18:50:50.195328 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 18:50:50.195344 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 18:50:50.195360 kernel: pnp: PnP ACPI init Oct 2 18:50:50.195641 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Oct 2 18:50:50.195668 kernel: pnp: PnP ACPI: found 1 devices Oct 2 18:50:50.195685 kernel: NET: Registered PF_INET protocol family Oct 2 18:50:50.195707 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 18:50:50.195725 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 2 18:50:50.195741 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 18:50:50.195758 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 18:50:50.195774 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 2 18:50:50.195791 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 2 18:50:50.195808 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 18:50:50.195824 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 18:50:50.195841 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 18:50:50.195863 kernel: PCI: CLS 0 bytes, default 64 Oct 2 18:50:50.195879 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Oct 2 18:50:50.195895 kernel: kvm [1]: HYP mode not available Oct 2 18:50:50.195912 kernel: Initialise system trusted keyrings Oct 2 18:50:50.195928 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 2 18:50:50.195944 kernel: Key type asymmetric registered Oct 2 18:50:50.195960 kernel: Asymmetric key parser 'x509' registered Oct 2 18:50:50.195976 kernel: Block 
layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 18:50:50.195992 kernel: io scheduler mq-deadline registered Oct 2 18:50:50.196013 kernel: io scheduler kyber registered Oct 2 18:50:50.196029 kernel: io scheduler bfq registered Oct 2 18:50:50.196384 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Oct 2 18:50:50.196413 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 2 18:50:50.196430 kernel: ACPI: button: Power Button [PWRB] Oct 2 18:50:50.196447 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 18:50:50.196464 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Oct 2 18:50:50.196662 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Oct 2 18:50:50.196692 kernel: printk: console [ttyS0] disabled Oct 2 18:50:50.196709 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Oct 2 18:50:50.196726 kernel: printk: console [ttyS0] enabled Oct 2 18:50:50.196742 kernel: printk: bootconsole [uart0] disabled Oct 2 18:50:50.196759 kernel: thunder_xcv, ver 1.0 Oct 2 18:50:50.196775 kernel: thunder_bgx, ver 1.0 Oct 2 18:50:50.196791 kernel: nicpf, ver 1.0 Oct 2 18:50:50.196807 kernel: nicvf, ver 1.0 Oct 2 18:50:50.197012 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 2 18:50:50.200176 kernel: rtc-efi rtc-efi.0: setting system clock to 2023-10-02T18:50:49 UTC (1696272649) Oct 2 18:50:50.200221 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 2 18:50:50.200240 kernel: NET: Registered PF_INET6 protocol family Oct 2 18:50:50.200257 kernel: Segment Routing with IPv6 Oct 2 18:50:50.200273 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 18:50:50.200289 kernel: NET: Registered PF_PACKET protocol family Oct 2 18:50:50.200305 kernel: Key type dns_resolver registered Oct 2 18:50:50.200322 kernel: registered taskstats version 1 Oct 2 18:50:50.200345 kernel: Loading compiled-in X.509 certificates Oct 2 18:50:50.200362 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 3a2a38edc68cb70dc60ec0223a6460557b3bb28d' Oct 2 18:50:50.200378 kernel: Key type .fscrypt registered Oct 2 18:50:50.200394 kernel: Key type fscrypt-provisioning registered Oct 2 18:50:50.200410 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 2 18:50:50.200426 kernel: ima: Allocated hash algorithm: sha1 Oct 2 18:50:50.200442 kernel: ima: No architecture policies found Oct 2 18:50:50.200458 kernel: Freeing unused kernel memory: 34560K Oct 2 18:50:50.200474 kernel: Run /init as init process Oct 2 18:50:50.200494 kernel: with arguments: Oct 2 18:50:50.200510 kernel: /init Oct 2 18:50:50.200526 kernel: with environment: Oct 2 18:50:50.200541 kernel: HOME=/ Oct 2 18:50:50.200557 kernel: TERM=linux Oct 2 18:50:50.200573 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 18:50:50.200594 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 18:50:50.200614 systemd[1]: Detected virtualization amazon. Oct 2 18:50:50.200637 systemd[1]: Detected architecture arm64. Oct 2 18:50:50.200654 systemd[1]: Running in initrd. Oct 2 18:50:50.200672 systemd[1]: No hostname configured, using default hostname. Oct 2 18:50:50.200689 systemd[1]: Hostname set to . 
Oct 2 18:50:50.200707 systemd[1]: Initializing machine ID from VM UUID. Oct 2 18:50:50.200725 systemd[1]: Queued start job for default target initrd.target. Oct 2 18:50:50.200742 systemd[1]: Started systemd-ask-password-console.path. Oct 2 18:50:50.200760 systemd[1]: Reached target cryptsetup.target. Oct 2 18:50:50.200781 systemd[1]: Reached target paths.target. Oct 2 18:50:50.200798 systemd[1]: Reached target slices.target. Oct 2 18:50:50.200815 systemd[1]: Reached target swap.target. Oct 2 18:50:50.200832 systemd[1]: Reached target timers.target. Oct 2 18:50:50.200850 systemd[1]: Listening on iscsid.socket. Oct 2 18:50:50.200868 systemd[1]: Listening on iscsiuio.socket. Oct 2 18:50:50.200885 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 18:50:50.200903 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 18:50:50.200924 systemd[1]: Listening on systemd-journald.socket. Oct 2 18:50:50.200942 systemd[1]: Listening on systemd-networkd.socket. Oct 2 18:50:50.200959 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 18:50:50.200977 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 18:50:50.200994 systemd[1]: Reached target sockets.target. Oct 2 18:50:50.201012 systemd[1]: Starting kmod-static-nodes.service... Oct 2 18:50:50.201029 systemd[1]: Finished network-cleanup.service. Oct 2 18:50:50.201047 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 18:50:50.201065 systemd[1]: Starting systemd-journald.service... Oct 2 18:50:50.201086 systemd[1]: Starting systemd-modules-load.service... Oct 2 18:50:50.201104 systemd[1]: Starting systemd-resolved.service... Oct 2 18:50:50.201121 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 18:50:50.201139 systemd[1]: Finished kmod-static-nodes.service. Oct 2 18:50:50.201156 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 18:50:50.201174 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 18:50:50.201209 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 18:50:50.201231 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 18:50:50.201249 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 18:50:50.201272 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 18:50:50.201293 systemd-journald[309]: Journal started Oct 2 18:50:50.201376 systemd-journald[309]: Runtime Journal (/run/log/journal/ec2dff44910014e7d53ba232516fe5c1) is 8.0M, max 75.4M, 67.4M free. Oct 2 18:50:50.144799 systemd-modules-load[310]: Inserted module 'overlay' Oct 2 18:50:50.211874 systemd[1]: Started systemd-journald.service. Oct 2 18:50:50.222916 kernel: audit: type=1130 audit(1696272650.210:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:50.222989 kernel: Bridge firewalling registered Oct 2 18:50:50.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:50.227361 systemd-modules-load[310]: Inserted module 'br_netfilter' Oct 2 18:50:50.234714 systemd-resolved[311]: Positive Trust Anchors: Oct 2 18:50:50.235743 systemd-resolved[311]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 18:50:50.235803 systemd-resolved[311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 18:50:50.266240 kernel: SCSI subsystem initialized Oct 2 18:50:50.267858 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 18:50:50.285801 kernel: audit: type=1130 audit(1696272650.269:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:50.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:50.271805 systemd[1]: Starting dracut-cmdline.service... Oct 2 18:50:50.306634 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 18:50:50.306721 kernel: device-mapper: uevent: version 1.0.3 Oct 2 18:50:50.309941 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 18:50:50.325934 systemd-modules-load[310]: Inserted module 'dm_multipath' Oct 2 18:50:50.330067 dracut-cmdline[327]: dracut-dracut-053 Oct 2 18:50:50.332700 systemd[1]: Finished systemd-modules-load.service. Oct 2 18:50:50.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:50.358347 systemd[1]: Starting systemd-sysctl.service... Oct 2 18:50:50.375250 kernel: audit: type=1130 audit(1696272650.347:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:50.375514 dracut-cmdline[327]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 18:50:50.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:50.418218 systemd[1]: Finished systemd-sysctl.service. Oct 2 18:50:50.430230 kernel: audit: type=1130 audit(1696272650.418:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:50.637246 kernel: Loading iSCSI transport class v2.0-870. 
Oct 2 18:50:50.651218 kernel: iscsi: registered transport (tcp) Oct 2 18:50:50.678576 kernel: iscsi: registered transport (qla4xxx) Oct 2 18:50:50.678645 kernel: QLogic iSCSI HBA Driver Oct 2 18:50:50.851890 systemd-resolved[311]: Defaulting to hostname 'linux'. Oct 2 18:50:50.854499 kernel: random: crng init done Oct 2 18:50:50.858093 systemd[1]: Started systemd-resolved.service. Oct 2 18:50:50.872445 kernel: audit: type=1130 audit(1696272650.860:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:50.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:50.861805 systemd[1]: Reached target nss-lookup.target. Oct 2 18:50:50.923622 systemd[1]: Finished dracut-cmdline.service. Oct 2 18:50:50.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:50.928353 systemd[1]: Starting dracut-pre-udev.service... Oct 2 18:50:50.938346 kernel: audit: type=1130 audit(1696272650.924:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:51.021241 kernel: raid6: neonx8 gen() 6380 MB/s Oct 2 18:50:51.039227 kernel: raid6: neonx8 xor() 4570 MB/s Oct 2 18:50:51.057226 kernel: raid6: neonx4 gen() 6530 MB/s Oct 2 18:50:51.075226 kernel: raid6: neonx4 xor() 4671 MB/s Oct 2 18:50:51.093227 kernel: raid6: neonx2 gen() 5770 MB/s Oct 2 18:50:51.111227 kernel: raid6: neonx2 xor() 4385 MB/s Oct 2 18:50:51.129225 kernel: raid6: neonx1 gen() 4476 MB/s Oct 2 18:50:51.147223 kernel: raid6: neonx1 xor() 3561 MB/s Oct 2 18:50:51.165225 kernel: raid6: int64x8 gen() 3448 MB/s Oct 2 18:50:51.183225 kernel: raid6: int64x8 xor() 2052 MB/s Oct 2 18:50:51.201225 kernel: raid6: int64x4 gen() 3832 MB/s Oct 2 18:50:51.219226 kernel: raid6: int64x4 xor() 2168 MB/s Oct 2 18:50:51.237225 kernel: raid6: int64x2 gen() 3604 MB/s Oct 2 18:50:51.255222 kernel: raid6: int64x2 xor() 1927 MB/s Oct 2 18:50:51.273224 kernel: raid6: int64x1 gen() 2759 MB/s Oct 2 18:50:51.292820 kernel: raid6: int64x1 xor() 1398 MB/s Oct 2 18:50:51.292854 kernel: raid6: using algorithm neonx4 gen() 6530 MB/s Oct 2 18:50:51.292879 kernel: raid6: .... xor() 4671 MB/s, rmw enabled Oct 2 18:50:51.294704 kernel: raid6: using neon recovery algorithm Oct 2 18:50:51.313230 kernel: xor: measuring software checksum speed Oct 2 18:50:51.316227 kernel: 8regs : 9334 MB/sec Oct 2 18:50:51.319224 kernel: 32regs : 11111 MB/sec Oct 2 18:50:51.322958 kernel: arm64_neon : 9564 MB/sec Oct 2 18:50:51.322991 kernel: xor: using function: 32regs (11111 MB/sec) Oct 2 18:50:51.413242 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Oct 2 18:50:51.452496 systemd[1]: Finished dracut-pre-udev.service. Oct 2 18:50:51.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 18:50:51.462000 audit: BPF prog-id=7 op=LOAD Oct 2 18:50:51.466441 kernel: audit: type=1130 audit(1696272651.453:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:51.466495 kernel: audit: type=1334 audit(1696272651.462:9): prog-id=7 op=LOAD Oct 2 18:50:51.464631 systemd[1]: Starting systemd-udevd.service... Oct 2 18:50:51.469413 kernel: audit: type=1334 audit(1696272651.462:10): prog-id=8 op=LOAD Oct 2 18:50:51.462000 audit: BPF prog-id=8 op=LOAD Oct 2 18:50:51.504334 systemd-udevd[508]: Using default interface naming scheme 'v252'. Oct 2 18:50:51.515535 systemd[1]: Started systemd-udevd.service. Oct 2 18:50:51.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:51.523910 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 18:50:51.585088 dracut-pre-trigger[520]: rd.md=0: removing MD RAID activation Oct 2 18:50:51.697359 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 18:50:51.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:51.702076 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 18:50:51.819648 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 18:50:51.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:51.960223 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 2 18:50:51.960297 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Oct 2 18:50:51.972281 kernel: ena 0000:00:05.0: ENA device version: 0.10 Oct 2 18:50:51.972602 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Oct 2 18:50:51.987964 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Oct 2 18:50:51.988024 kernel: nvme nvme0: pci function 0000:00:04.0 Oct 2 18:50:51.988353 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:0b:b6:20:f0:03 Oct 2 18:50:51.993233 kernel: nvme nvme0: 2/0/0 default/read/poll queues Oct 2 18:50:51.999530 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 2 18:50:51.999586 kernel: GPT:9289727 != 16777215 Oct 2 18:50:51.999610 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 2 18:50:52.003282 kernel: GPT:9289727 != 16777215 Oct 2 18:50:52.003315 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 2 18:50:52.006896 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 18:50:52.011456 (udev-worker)[573]: Network interface NamePolicy= disabled on kernel command line. Oct 2 18:50:52.095229 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (571) Oct 2 18:50:52.206522 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 18:50:52.304961 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 18:50:52.319132 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 18:50:52.324533 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Oct 2 18:50:52.339091 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 18:50:52.353000 systemd[1]: Starting disk-uuid.service... Oct 2 18:50:52.376658 disk-uuid[670]: Primary Header is updated. Oct 2 18:50:52.376658 disk-uuid[670]: Secondary Entries is updated. Oct 2 18:50:52.376658 disk-uuid[670]: Secondary Header is updated. Oct 2 18:50:52.386226 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 18:50:52.394234 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 18:50:53.402178 disk-uuid[671]: The operation has completed successfully. Oct 2 18:50:53.404519 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 18:50:53.693649 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 18:50:53.694251 systemd[1]: Finished disk-uuid.service. Oct 2 18:50:53.700775 systemd[1]: Starting verity-setup.service... Oct 2 18:50:53.714685 kernel: kauditd_printk_skb: 3 callbacks suppressed Oct 2 18:50:53.714750 kernel: audit: type=1130 audit(1696272653.698:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:53.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:53.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:53.728833 kernel: audit: type=1131 audit(1696272653.698:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:53.754239 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 2 18:50:53.839617 systemd[1]: Found device dev-mapper-usr.device. Oct 2 18:50:53.849445 systemd[1]: Mounting sysusr-usr.mount... Oct 2 18:50:53.855609 systemd[1]: Finished verity-setup.service. Oct 2 18:50:53.867045 kernel: audit: type=1130 audit(1696272653.856:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:53.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:53.950234 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 18:50:53.951538 systemd[1]: Mounted sysusr-usr.mount. Oct 2 18:50:53.954567 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 18:50:53.958560 systemd[1]: Starting ignition-setup.service... Oct 2 18:50:53.961390 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 18:50:53.998571 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 18:50:53.998633 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 18:50:54.001094 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 18:50:54.011765 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 18:50:54.043400 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 18:50:54.091793 systemd[1]: Finished ignition-setup.service. 
Oct 2 18:50:54.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:54.096339 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 18:50:54.107240 kernel: audit: type=1130 audit(1696272654.093:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:54.334436 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 18:50:54.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:54.347223 kernel: audit: type=1130 audit(1696272654.335:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:54.346000 audit: BPF prog-id=9 op=LOAD Oct 2 18:50:54.348323 systemd[1]: Starting systemd-networkd.service... Oct 2 18:50:54.353993 kernel: audit: type=1334 audit(1696272654.346:19): prog-id=9 op=LOAD Oct 2 18:50:54.406350 systemd-networkd[1184]: lo: Link UP Oct 2 18:50:54.406374 systemd-networkd[1184]: lo: Gained carrier Oct 2 18:50:54.410105 systemd-networkd[1184]: Enumeration completed Oct 2 18:50:54.425085 kernel: audit: type=1130 audit(1696272654.411:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:54.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:54.410649 systemd-networkd[1184]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 18:50:54.410863 systemd[1]: Started systemd-networkd.service. Oct 2 18:50:54.412885 systemd[1]: Reached target network.target. Oct 2 18:50:54.417109 systemd[1]: Starting iscsiuio.service... Oct 2 18:50:54.439030 systemd[1]: Started iscsiuio.service. Oct 2 18:50:54.444919 systemd[1]: Starting iscsid.service... Oct 2 18:50:54.461609 systemd-networkd[1184]: eth0: Link UP Oct 2 18:50:54.464610 iscsid[1189]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 18:50:54.464610 iscsid[1189]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 2 18:50:54.464610 iscsid[1189]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 18:50:54.464610 iscsid[1189]: If using hardware iscsi like qla4xxx this message can be ignored. 
Oct 2 18:50:54.464610 iscsid[1189]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 18:50:54.464610 iscsid[1189]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 18:50:54.528676 kernel: audit: type=1130 audit(1696272654.441:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:54.530149 kernel: audit: type=1130 audit(1696272654.492:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:54.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:54.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:54.461617 systemd-networkd[1184]: eth0: Gained carrier Oct 2 18:50:54.491061 systemd[1]: Started iscsid.service. Oct 2 18:50:54.495594 systemd[1]: Starting dracut-initqueue.service... Oct 2 18:50:54.536401 systemd-networkd[1184]: eth0: DHCPv4 address 172.31.28.169/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 2 18:50:54.554826 systemd[1]: Finished dracut-initqueue.service. Oct 2 18:50:54.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:54.556569 systemd[1]: Reached target remote-fs-pre.target. Oct 2 18:50:54.571371 kernel: audit: type=1130 audit(1696272654.553:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:54.567769 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 18:50:54.569819 systemd[1]: Reached target remote-fs.target. Oct 2 18:50:54.588684 systemd[1]: Starting dracut-pre-mount.service... Oct 2 18:50:54.622365 systemd[1]: Finished dracut-pre-mount.service. Oct 2 18:50:54.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:54.827802 ignition[1102]: Ignition 2.14.0 Oct 2 18:50:54.827834 ignition[1102]: Stage: fetch-offline Oct 2 18:50:54.828451 ignition[1102]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 18:50:54.828643 ignition[1102]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 18:50:54.848956 ignition[1102]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 18:50:54.852051 ignition[1102]: Ignition finished successfully Oct 2 18:50:54.855377 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 18:50:54.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:54.860789 systemd[1]: Starting ignition-fetch.service... 
Oct 2 18:50:54.891363 ignition[1208]: Ignition 2.14.0 Oct 2 18:50:54.891393 ignition[1208]: Stage: fetch Oct 2 18:50:54.891746 ignition[1208]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 18:50:54.891806 ignition[1208]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 18:50:54.906964 ignition[1208]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 18:50:54.909666 ignition[1208]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 18:50:54.935024 ignition[1208]: INFO : PUT result: OK Oct 2 18:50:54.966970 ignition[1208]: DEBUG : parsed url from cmdline: "" Oct 2 18:50:54.966970 ignition[1208]: INFO : no config URL provided Oct 2 18:50:54.966970 ignition[1208]: INFO : reading system config file "/usr/lib/ignition/user.ign" Oct 2 18:50:54.972970 ignition[1208]: INFO : no config at "/usr/lib/ignition/user.ign" Oct 2 18:50:54.972970 ignition[1208]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 18:50:54.977649 ignition[1208]: INFO : PUT result: OK Oct 2 18:50:54.979361 ignition[1208]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Oct 2 18:50:54.983265 ignition[1208]: INFO : GET result: OK Oct 2 18:50:54.984804 ignition[1208]: DEBUG : parsing config with SHA512: 49347ed538070e28130561435352757f776525f7c9f597096e9b4fff929090ad05a74e1c2945570c5b1e322f5ca0a4d03964678ca229d031622040641117a575 Oct 2 18:50:55.014691 unknown[1208]: fetched base config from "system" Oct 2 18:50:55.014720 unknown[1208]: fetched base config from "system" Oct 2 18:50:55.015789 ignition[1208]: fetch: fetch complete Oct 2 18:50:55.014736 unknown[1208]: fetched user config from "aws" Oct 2 18:50:55.015803 ignition[1208]: fetch: fetch passed Oct 2 18:50:55.020848 systemd[1]: Finished ignition-fetch.service. Oct 2 18:50:55.015890 ignition[1208]: Ignition finished successfully Oct 2 18:50:55.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:55.031878 systemd[1]: Starting ignition-kargs.service... Oct 2 18:50:55.062089 ignition[1214]: Ignition 2.14.0 Oct 2 18:50:55.062118 ignition[1214]: Stage: kargs Oct 2 18:50:55.062513 ignition[1214]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 18:50:55.062575 ignition[1214]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 18:50:55.076912 ignition[1214]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 18:50:55.079529 ignition[1214]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 18:50:55.083039 ignition[1214]: INFO : PUT result: OK Oct 2 18:50:55.088251 ignition[1214]: kargs: kargs passed Oct 2 18:50:55.089847 ignition[1214]: Ignition finished successfully Oct 2 18:50:55.093153 systemd[1]: Finished ignition-kargs.service. Oct 2 18:50:55.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:55.098166 systemd[1]: Starting ignition-disks.service... 
Oct 2 18:50:55.128592 ignition[1221]: Ignition 2.14.0 Oct 2 18:50:55.128622 ignition[1221]: Stage: disks Oct 2 18:50:55.128980 ignition[1221]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 18:50:55.129038 ignition[1221]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 18:50:55.145215 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 18:50:55.147598 ignition[1221]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 18:50:55.151370 ignition[1221]: INFO : PUT result: OK Oct 2 18:50:55.155995 ignition[1221]: disks: disks passed Oct 2 18:50:55.156112 ignition[1221]: Ignition finished successfully Oct 2 18:50:55.160302 systemd[1]: Finished ignition-disks.service. Oct 2 18:50:55.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:55.163087 systemd[1]: Reached target initrd-root-device.target. Oct 2 18:50:55.165600 systemd[1]: Reached target local-fs-pre.target. Oct 2 18:50:55.167315 systemd[1]: Reached target local-fs.target. Oct 2 18:50:55.169142 systemd[1]: Reached target sysinit.target. Oct 2 18:50:55.169810 systemd[1]: Reached target basic.target. Oct 2 18:50:55.171661 systemd[1]: Starting systemd-fsck-root.service... Oct 2 18:50:55.234644 systemd-fsck[1229]: ROOT: clean, 603/553520 files, 56011/553472 blocks Oct 2 18:50:55.244756 systemd[1]: Finished systemd-fsck-root.service. Oct 2 18:50:55.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:55.248156 systemd[1]: Mounting sysroot.mount... Oct 2 18:50:55.274230 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 18:50:55.276596 systemd[1]: Mounted sysroot.mount. Oct 2 18:50:55.278183 systemd[1]: Reached target initrd-root-fs.target. Oct 2 18:50:55.293744 systemd[1]: Mounting sysroot-usr.mount... Oct 2 18:50:55.297491 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 18:50:55.297601 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 18:50:55.297666 systemd[1]: Reached target ignition-diskful.target. Oct 2 18:50:55.329582 systemd[1]: Mounted sysroot-usr.mount. Oct 2 18:50:55.344364 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 18:50:55.349230 systemd[1]: Starting initrd-setup-root.service... Oct 2 18:50:55.376204 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1246) Oct 2 18:50:55.384606 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 18:50:55.384672 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 18:50:55.386873 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 18:50:55.393272 initrd-setup-root[1251]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 18:50:55.395841 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 18:50:55.399355 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Oct 2 18:50:55.424435 initrd-setup-root[1277]: cut: /sysroot/etc/group: No such file or directory Oct 2 18:50:55.443608 initrd-setup-root[1285]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 18:50:55.462619 initrd-setup-root[1293]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 18:50:55.699258 systemd[1]: Finished initrd-setup-root.service. Oct 2 18:50:55.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:55.703944 systemd[1]: Starting ignition-mount.service... Oct 2 18:50:55.709937 systemd[1]: Starting sysroot-boot.service... Oct 2 18:50:55.739337 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Oct 2 18:50:55.739515 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Oct 2 18:50:55.770796 systemd[1]: Finished sysroot-boot.service. Oct 2 18:50:55.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:55.786854 ignition[1313]: INFO : Ignition 2.14.0 Oct 2 18:50:55.786854 ignition[1313]: INFO : Stage: mount Oct 2 18:50:55.793484 ignition[1313]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 18:50:55.793484 ignition[1313]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 18:50:55.806663 ignition[1313]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 18:50:55.806663 ignition[1313]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 18:50:55.812943 ignition[1313]: INFO : PUT result: OK Oct 2 18:50:55.818179 ignition[1313]: INFO : mount: mount passed Oct 2 18:50:55.820022 ignition[1313]: INFO : Ignition finished successfully Oct 2 18:50:55.823273 systemd[1]: Finished ignition-mount.service. Oct 2 18:50:55.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:50:55.827941 systemd[1]: Starting ignition-files.service... Oct 2 18:50:55.851617 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 18:50:55.875225 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1321) Oct 2 18:50:55.880948 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 18:50:55.880989 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 18:50:55.883209 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 18:50:55.890213 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 18:50:55.895469 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Oct 2 18:50:55.929770 ignition[1340]: INFO : Ignition 2.14.0 Oct 2 18:50:55.929770 ignition[1340]: INFO : Stage: files Oct 2 18:50:55.933329 ignition[1340]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 18:50:55.933329 ignition[1340]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 18:50:55.950886 ignition[1340]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 18:50:55.953990 ignition[1340]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 18:50:55.956999 ignition[1340]: INFO : PUT result: OK Oct 2 18:50:55.963533 ignition[1340]: DEBUG : files: compiled without relabeling support, skipping Oct 2 18:50:55.968082 ignition[1340]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 18:50:55.971160 ignition[1340]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 18:50:56.012079 ignition[1340]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 18:50:56.015116 ignition[1340]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 18:50:56.019099 unknown[1340]: wrote ssh authorized keys file for user: core Oct 2 18:50:56.021421 ignition[1340]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 18:50:56.025351 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Oct 2 18:50:56.029492 ignition[1340]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Oct 2 18:50:56.341517 ignition[1340]: INFO : GET result: OK Oct 2 18:50:56.443370 systemd-networkd[1184]: eth0: Gained IPv6LL Oct 2 18:50:56.784333 ignition[1340]: DEBUG : file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Oct 2 18:50:56.789547 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Oct 2 18:50:56.789547 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.24.2-linux-arm64.tar.gz" Oct 2 18:50:56.789547 ignition[1340]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-arm64.tar.gz: attempt #1 Oct 2 18:50:56.881435 ignition[1340]: INFO : GET result: OK Oct 2 18:50:57.042249 ignition[1340]: DEBUG : file matches expected sum of: ebd055e9b2888624d006decd582db742131ed815d059d529ba21eaf864becca98a84b20a10eec91051b9d837c6855d28d5042bf5e9a454f4540aec6b82d37e96 Oct 2 18:50:57.047640 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.24.2-linux-arm64.tar.gz" Oct 2 18:50:57.047640 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Oct 2 18:50:57.047640 ignition[1340]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 18:50:57.069987 ignition[1340]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3995384028" Oct 2 18:50:57.069987 ignition[1340]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at 
"/mnt/oem3995384028": device or resource busy Oct 2 18:50:57.069987 ignition[1340]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3995384028", trying btrfs: device or resource busy Oct 2 18:50:57.069987 ignition[1340]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3995384028" Oct 2 18:50:57.083413 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1343) Oct 2 18:50:57.083450 ignition[1340]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3995384028" Oct 2 18:50:57.086356 ignition[1340]: INFO : op(3): [started] unmounting "/mnt/oem3995384028" Oct 2 18:50:57.088882 ignition[1340]: INFO : op(3): [finished] unmounting "/mnt/oem3995384028" Oct 2 18:50:57.088882 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Oct 2 18:50:57.095360 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 18:50:57.095360 ignition[1340]: INFO : GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/arm64/kubeadm: attempt #1 Oct 2 18:50:57.183108 ignition[1340]: INFO : GET result: OK Oct 2 18:50:58.560148 ignition[1340]: DEBUG : file matches expected sum of: daab8965a4f617d1570d04c031ab4d55fff6aa13a61f0e4045f2338947f9fb0ee3a80fdee57cfe86db885390595460342181e1ec52b89f127ef09c393ae3db7f Oct 2 18:50:58.565249 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 18:50:58.565249 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 18:50:58.565249 ignition[1340]: INFO : GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/arm64/kubelet: attempt #1 Oct 2 18:50:58.611324 ignition[1340]: INFO : GET result: OK Oct 2 18:51:00.400725 ignition[1340]: DEBUG : file matches expected sum of: 7b872a34d86e8aa75455a62a20f5cf16426de2ae54ffb8e0250fead920838df818201b8512c2f8bf4c939e5b21babab371f3a48803e2e861da9e6f8cdd022324 Oct 2 18:51:00.405961 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 18:51:00.409341 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh" Oct 2 18:51:00.413157 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 18:51:00.416755 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 18:51:00.420447 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 18:51:00.423956 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Oct 2 18:51:00.427942 ignition[1340]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 18:51:00.442869 ignition[1340]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem668045758" Oct 2 18:51:00.445977 ignition[1340]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem668045758": device or resource busy Oct 2 18:51:00.445977 ignition[1340]: ERROR : 
failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem668045758", trying btrfs: device or resource busy Oct 2 18:51:00.445977 ignition[1340]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem668045758" Oct 2 18:51:00.464737 ignition[1340]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem668045758" Oct 2 18:51:00.464737 ignition[1340]: INFO : op(6): [started] unmounting "/mnt/oem668045758" Oct 2 18:51:00.464737 ignition[1340]: INFO : op(6): [finished] unmounting "/mnt/oem668045758" Oct 2 18:51:00.464737 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Oct 2 18:51:00.464737 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Oct 2 18:51:00.464737 ignition[1340]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 18:51:00.456733 systemd[1]: mnt-oem668045758.mount: Deactivated successfully. Oct 2 18:51:00.489692 ignition[1340]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4284233735" Oct 2 18:51:00.492756 ignition[1340]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4284233735": device or resource busy Oct 2 18:51:00.492756 ignition[1340]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4284233735", trying btrfs: device or resource busy Oct 2 18:51:00.492756 ignition[1340]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4284233735" Oct 2 18:51:00.505748 ignition[1340]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4284233735" Oct 2 18:51:00.505748 ignition[1340]: INFO : op(9): [started] unmounting "/mnt/oem4284233735" Oct 2 18:51:00.504897 systemd[1]: mnt-oem4284233735.mount: Deactivated successfully. 
Oct 2 18:51:00.512815 ignition[1340]: INFO : op(9): [finished] unmounting "/mnt/oem4284233735" Oct 2 18:51:00.512815 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Oct 2 18:51:00.512815 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 18:51:00.512815 ignition[1340]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 18:51:00.537540 ignition[1340]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem513406277" Oct 2 18:51:00.543721 ignition[1340]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem513406277": device or resource busy Oct 2 18:51:00.543721 ignition[1340]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem513406277", trying btrfs: device or resource busy Oct 2 18:51:00.543721 ignition[1340]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem513406277" Oct 2 18:51:00.543721 ignition[1340]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem513406277" Oct 2 18:51:00.543721 ignition[1340]: INFO : op(c): [started] unmounting "/mnt/oem513406277" Oct 2 18:51:00.543721 ignition[1340]: INFO : op(c): [finished] unmounting "/mnt/oem513406277" Oct 2 18:51:00.543721 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 18:51:00.543721 ignition[1340]: INFO : files: op(d): [started] processing unit "nvidia.service" Oct 2 18:51:00.543721 ignition[1340]: INFO : files: op(d): [finished] processing unit "nvidia.service" Oct 2 18:51:00.543721 ignition[1340]: INFO : files: op(e): [started] processing unit "coreos-metadata-sshkeys@.service" Oct 2 18:51:00.543721 ignition[1340]: INFO : files: op(e): [finished] processing unit "coreos-metadata-sshkeys@.service" Oct 2 18:51:00.543721 ignition[1340]: INFO : files: op(f): [started] processing unit "amazon-ssm-agent.service" Oct 2 18:51:00.543721 ignition[1340]: INFO : files: op(f): op(10): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Oct 2 18:51:00.543721 ignition[1340]: INFO : files: op(f): op(10): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Oct 2 18:51:00.543721 ignition[1340]: INFO : files: op(f): [finished] processing unit "amazon-ssm-agent.service" Oct 2 18:51:00.543721 ignition[1340]: INFO : files: op(11): [started] processing unit "prepare-cni-plugins.service" Oct 2 18:51:00.543721 ignition[1340]: INFO : files: op(11): op(12): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 18:51:00.543721 ignition[1340]: INFO : files: op(11): op(12): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 18:51:00.543721 ignition[1340]: INFO : files: op(11): [finished] processing unit "prepare-cni-plugins.service" Oct 2 18:51:00.543721 ignition[1340]: INFO : files: op(13): [started] processing unit "prepare-critools.service" Oct 2 18:51:00.640405 kernel: kauditd_printk_skb: 9 callbacks suppressed Oct 2 18:51:00.640444 kernel: audit: type=1130 audit(1696272660.617:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Oct 2 18:51:00.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:00.640546 ignition[1340]: INFO : files: op(13): op(14): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 18:51:00.640546 ignition[1340]: INFO : files: op(13): op(14): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 18:51:00.640546 ignition[1340]: INFO : files: op(13): [finished] processing unit "prepare-critools.service" Oct 2 18:51:00.640546 ignition[1340]: INFO : files: op(15): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 18:51:00.640546 ignition[1340]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 18:51:00.640546 ignition[1340]: INFO : files: op(16): [started] setting preset to enabled for "prepare-critools.service" Oct 2 18:51:00.640546 ignition[1340]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 18:51:00.640546 ignition[1340]: INFO : files: op(17): [started] setting preset to enabled for "nvidia.service" Oct 2 18:51:00.640546 ignition[1340]: INFO : files: op(17): [finished] setting preset to enabled for "nvidia.service" Oct 2 18:51:00.640546 ignition[1340]: INFO : files: op(18): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 18:51:00.640546 ignition[1340]: INFO : files: op(18): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 18:51:00.640546 ignition[1340]: INFO : files: op(19): [started] setting preset to enabled for "amazon-ssm-agent.service" Oct 2 18:51:00.640546 ignition[1340]: INFO : files: op(19): [finished] setting preset to enabled for "amazon-ssm-agent.service" Oct 2 18:51:00.640546 ignition[1340]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 18:51:00.640546 ignition[1340]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 18:51:00.640546 ignition[1340]: INFO : files: files passed Oct 2 18:51:00.640546 ignition[1340]: INFO : Ignition finished successfully Oct 2 18:51:00.614738 systemd[1]: Finished ignition-files.service. Oct 2 18:51:00.640927 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 18:51:00.672245 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 18:51:00.677441 systemd[1]: Starting ignition-quench.service... Oct 2 18:51:00.725416 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 18:51:00.725751 systemd[1]: Finished ignition-quench.service. Oct 2 18:51:00.746937 kernel: audit: type=1130 audit(1696272660.728:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:00.746978 kernel: audit: type=1131 audit(1696272660.728:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 18:51:00.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:00.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:00.758736 initrd-setup-root-after-ignition[1365]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 18:51:00.763904 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 18:51:00.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:00.764597 systemd[1]: Reached target ignition-complete.target. Oct 2 18:51:00.784217 kernel: audit: type=1130 audit(1696272660.763:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:00.766349 systemd[1]: Starting initrd-parse-etc.service... Oct 2 18:51:00.828630 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 18:51:00.829438 systemd[1]: Finished initrd-parse-etc.service. Oct 2 18:51:00.864077 kernel: audit: type=1130 audit(1696272660.831:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:00.864150 kernel: audit: type=1131 audit(1696272660.839:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:00.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:00.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:00.840427 systemd[1]: Reached target initrd-fs.target. Oct 2 18:51:00.849685 systemd[1]: Reached target initrd.target. Oct 2 18:51:00.852840 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 18:51:00.854384 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 18:51:00.902504 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 18:51:00.916172 kernel: audit: type=1130 audit(1696272660.903:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:00.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:00.910862 systemd[1]: Starting initrd-cleanup.service... Oct 2 18:51:00.942342 systemd[1]: Stopped target nss-lookup.target. Oct 2 18:51:00.946035 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 18:51:00.949868 systemd[1]: Stopped target timers.target. 
Oct 2 18:51:00.953170 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 18:51:00.953611 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 18:51:00.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:00.966783 systemd[1]: Stopped target initrd.target. Oct 2 18:51:00.968357 kernel: audit: type=1131 audit(1696272660.957:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:00.970246 systemd[1]: Stopped target basic.target. Oct 2 18:51:00.973460 systemd[1]: Stopped target ignition-complete.target. Oct 2 18:51:00.976958 systemd[1]: Stopped target ignition-diskful.target. Oct 2 18:51:00.980673 systemd[1]: Stopped target initrd-root-device.target. Oct 2 18:51:00.984512 systemd[1]: Stopped target remote-fs.target. Oct 2 18:51:00.987827 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 18:51:00.991390 systemd[1]: Stopped target sysinit.target. Oct 2 18:51:00.994670 systemd[1]: Stopped target local-fs.target. Oct 2 18:51:00.998005 systemd[1]: Stopped target local-fs-pre.target. Oct 2 18:51:01.001467 systemd[1]: Stopped target swap.target. Oct 2 18:51:01.004555 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 18:51:01.006666 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 18:51:01.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.010246 systemd[1]: Stopped target cryptsetup.target. Oct 2 18:51:01.033320 kernel: audit: type=1131 audit(1696272661.008:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.033368 kernel: audit: type=1131 audit(1696272661.021:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.020900 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 18:51:01.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.021130 systemd[1]: Stopped dracut-initqueue.service. Oct 2 18:51:01.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.023300 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 18:51:01.057280 iscsid[1189]: iscsid shutting down. Oct 2 18:51:01.023613 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 18:51:01.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 18:51:01.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.033664 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 18:51:01.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.034655 systemd[1]: Stopped ignition-files.service. Oct 2 18:51:01.039405 systemd[1]: Stopping ignition-mount.service... Oct 2 18:51:01.057562 systemd[1]: Stopping iscsid.service... Oct 2 18:51:01.060149 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 18:51:01.060879 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 18:51:01.065951 systemd[1]: Stopping sysroot-boot.service... Oct 2 18:51:01.067453 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 18:51:01.067858 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 18:51:01.070326 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 18:51:01.138065 ignition[1378]: INFO : Ignition 2.14.0 Oct 2 18:51:01.138065 ignition[1378]: INFO : Stage: umount Oct 2 18:51:01.138065 ignition[1378]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 18:51:01.138065 ignition[1378]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 18:51:01.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.070766 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 18:51:01.155751 ignition[1378]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 18:51:01.155751 ignition[1378]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 18:51:01.080141 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 18:51:01.080376 systemd[1]: Stopped iscsid.service. Oct 2 18:51:01.095673 systemd[1]: Stopping iscsiuio.service... Oct 2 18:51:01.146946 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 18:51:01.147178 systemd[1]: Stopped iscsiuio.service. Oct 2 18:51:01.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.150310 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 18:51:01.150518 systemd[1]: Finished initrd-cleanup.service. Oct 2 18:51:01.176171 ignition[1378]: INFO : PUT result: OK Oct 2 18:51:01.181667 ignition[1378]: INFO : umount: umount passed Oct 2 18:51:01.183593 ignition[1378]: INFO : Ignition finished successfully Oct 2 18:51:01.187545 systemd[1]: ignition-mount.service: Deactivated successfully. 
Oct 2 18:51:01.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.187725 systemd[1]: Stopped ignition-mount.service. Oct 2 18:51:01.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.190538 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 18:51:01.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.190625 systemd[1]: Stopped ignition-disks.service. Oct 2 18:51:01.194317 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 18:51:01.194407 systemd[1]: Stopped ignition-kargs.service. Oct 2 18:51:01.196177 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 2 18:51:01.196287 systemd[1]: Stopped ignition-fetch.service. Oct 2 18:51:01.198043 systemd[1]: Stopped target network.target. Oct 2 18:51:01.212165 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 18:51:01.212297 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 18:51:01.213474 systemd[1]: Stopped target paths.target. Oct 2 18:51:01.213841 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 18:51:01.226444 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 18:51:01.228427 systemd[1]: Stopped target slices.target. Oct 2 18:51:01.228906 systemd[1]: Stopped target sockets.target. Oct 2 18:51:01.229653 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 18:51:01.229730 systemd[1]: Closed iscsid.socket. Oct 2 18:51:01.229991 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 18:51:01.230061 systemd[1]: Closed iscsiuio.socket. Oct 2 18:51:01.230647 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 18:51:01.230734 systemd[1]: Stopped ignition-setup.service. Oct 2 18:51:01.231301 systemd[1]: Stopping systemd-networkd.service... Oct 2 18:51:01.231867 systemd[1]: Stopping systemd-resolved.service... Oct 2 18:51:01.232471 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 18:51:01.232640 systemd[1]: Stopped sysroot-boot.service. 
Oct 2 18:51:01.233258 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 18:51:01.233337 systemd[1]: Stopped initrd-setup-root.service. Oct 2 18:51:01.251913 systemd-networkd[1184]: eth0: DHCPv6 lease lost Oct 2 18:51:01.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.295000 audit: BPF prog-id=9 op=UNLOAD Oct 2 18:51:01.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.289762 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 18:51:01.289979 systemd[1]: Stopped systemd-networkd.service. Oct 2 18:51:01.314000 audit: BPF prog-id=6 op=UNLOAD Oct 2 18:51:01.292123 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 18:51:01.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.292210 systemd[1]: Closed systemd-networkd.socket. Oct 2 18:51:01.295400 systemd[1]: Stopping network-cleanup.service... Oct 2 18:51:01.297276 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 18:51:01.297411 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 18:51:01.299501 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 18:51:01.299594 systemd[1]: Stopped systemd-sysctl.service. Oct 2 18:51:01.301586 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 18:51:01.301668 systemd[1]: Stopped systemd-modules-load.service. Oct 2 18:51:01.303773 systemd[1]: Stopping systemd-udevd.service... Oct 2 18:51:01.306058 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 18:51:01.306336 systemd[1]: Stopped systemd-resolved.service. Oct 2 18:51:01.313885 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 18:51:01.314158 systemd[1]: Stopped systemd-udevd.service. Oct 2 18:51:01.345046 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 18:51:01.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 2 18:51:01.345164 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 18:51:01.347325 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 18:51:01.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.347411 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 18:51:01.349329 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 18:51:01.349451 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 18:51:01.354822 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 18:51:01.354943 systemd[1]: Stopped dracut-cmdline.service. Oct 2 18:51:01.356883 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 18:51:01.356974 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 18:51:01.360302 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 18:51:01.362239 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 18:51:01.362367 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 18:51:01.395590 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 18:51:01.397872 systemd[1]: Stopped network-cleanup.service. Oct 2 18:51:01.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.412880 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 18:51:01.413889 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 18:51:01.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:01.422852 systemd[1]: Reached target initrd-switch-root.target. Oct 2 18:51:01.430377 systemd[1]: Starting initrd-switch-root.service... Oct 2 18:51:01.457496 systemd[1]: mnt-oem513406277.mount: Deactivated successfully. Oct 2 18:51:01.459665 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 18:51:01.459809 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 18:51:01.471690 systemd[1]: Switching root. Oct 2 18:51:01.500698 systemd-journald[309]: Journal stopped Oct 2 18:51:07.536271 systemd-journald[309]: Received SIGTERM from PID 1 (systemd). Oct 2 18:51:07.536791 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 18:51:07.536953 kernel: SELinux: Class anon_inode not defined in policy. 
Oct 2 18:51:07.537059 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 18:51:07.537095 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 18:51:07.537127 kernel: SELinux: policy capability open_perms=1 Oct 2 18:51:07.537160 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 18:51:07.537310 kernel: SELinux: policy capability always_check_network=0 Oct 2 18:51:07.537350 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 18:51:07.537381 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 18:51:07.537478 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 18:51:07.537510 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 18:51:07.537546 systemd[1]: Successfully loaded SELinux policy in 90.576ms. Oct 2 18:51:07.537705 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.560ms. Oct 2 18:51:07.537744 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 18:51:07.537776 systemd[1]: Detected virtualization amazon. Oct 2 18:51:07.537806 systemd[1]: Detected architecture arm64. Oct 2 18:51:07.537837 systemd[1]: Detected first boot. Oct 2 18:51:07.537870 systemd[1]: Initializing machine ID from VM UUID. Oct 2 18:51:07.537900 systemd[1]: Populated /etc with preset unit settings. Oct 2 18:51:07.537938 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 18:51:07.537976 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 18:51:07.538012 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 18:51:07.538115 kernel: kauditd_printk_skb: 39 callbacks suppressed Oct 2 18:51:07.538148 kernel: audit: type=1334 audit(1696272666.949:82): prog-id=12 op=LOAD Oct 2 18:51:07.538178 kernel: audit: type=1334 audit(1696272666.949:83): prog-id=3 op=UNLOAD Oct 2 18:51:07.538226 kernel: audit: type=1334 audit(1696272666.951:84): prog-id=13 op=LOAD Oct 2 18:51:07.538264 kernel: audit: type=1334 audit(1696272666.953:85): prog-id=14 op=LOAD Oct 2 18:51:07.538296 kernel: audit: type=1334 audit(1696272666.953:86): prog-id=4 op=UNLOAD Oct 2 18:51:07.538326 kernel: audit: type=1334 audit(1696272666.954:87): prog-id=5 op=UNLOAD Oct 2 18:51:07.538354 kernel: audit: type=1334 audit(1696272666.959:88): prog-id=15 op=LOAD Oct 2 18:51:07.538387 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 18:51:07.538420 kernel: audit: type=1334 audit(1696272666.959:89): prog-id=12 op=UNLOAD Oct 2 18:51:07.538452 systemd[1]: Stopped initrd-switch-root.service. Oct 2 18:51:07.538486 kernel: audit: type=1334 audit(1696272666.961:90): prog-id=16 op=LOAD Oct 2 18:51:07.538517 kernel: audit: type=1334 audit(1696272666.964:91): prog-id=17 op=LOAD Oct 2 18:51:07.538552 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 18:51:07.538583 systemd[1]: Created slice system-addon\x2dconfig.slice. 
Oct 2 18:51:07.538616 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 18:51:07.538646 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Oct 2 18:51:07.538679 systemd[1]: Created slice system-getty.slice. Oct 2 18:51:07.538712 systemd[1]: Created slice system-modprobe.slice. Oct 2 18:51:07.538745 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 18:51:07.538786 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 18:51:07.538818 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 18:51:07.538850 systemd[1]: Created slice user.slice. Oct 2 18:51:07.538881 systemd[1]: Started systemd-ask-password-console.path. Oct 2 18:51:07.538912 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 18:51:07.538944 systemd[1]: Set up automount boot.automount. Oct 2 18:51:07.538980 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 18:51:07.539013 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 18:51:07.539047 systemd[1]: Stopped target initrd-fs.target. Oct 2 18:51:07.539106 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 18:51:07.539145 systemd[1]: Reached target integritysetup.target. Oct 2 18:51:07.539180 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 18:51:07.539250 systemd[1]: Reached target remote-fs.target. Oct 2 18:51:07.539291 systemd[1]: Reached target slices.target. Oct 2 18:51:07.539326 systemd[1]: Reached target swap.target. Oct 2 18:51:07.539360 systemd[1]: Reached target torcx.target. Oct 2 18:51:07.539392 systemd[1]: Reached target veritysetup.target. Oct 2 18:51:07.539423 systemd[1]: Listening on systemd-coredump.socket. Oct 2 18:51:07.539458 systemd[1]: Listening on systemd-initctl.socket. Oct 2 18:51:07.539498 systemd[1]: Listening on systemd-networkd.socket. Oct 2 18:51:07.539539 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 18:51:07.539571 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 18:51:07.539601 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 18:51:07.539635 systemd[1]: Mounting dev-hugepages.mount... Oct 2 18:51:07.539668 systemd[1]: Mounting dev-mqueue.mount... Oct 2 18:51:07.539698 systemd[1]: Mounting media.mount... Oct 2 18:51:07.539730 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 18:51:07.539767 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 18:51:07.539804 systemd[1]: Mounting tmp.mount... Oct 2 18:51:07.539839 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 18:51:07.539869 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 18:51:07.539901 systemd[1]: Starting kmod-static-nodes.service... Oct 2 18:51:07.539945 systemd[1]: Starting modprobe@configfs.service... Oct 2 18:51:07.539984 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 18:51:07.540017 systemd[1]: Starting modprobe@drm.service... Oct 2 18:51:07.540048 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 18:51:07.540081 systemd[1]: Starting modprobe@fuse.service... Oct 2 18:51:07.540117 systemd[1]: Starting modprobe@loop.service... Oct 2 18:51:07.540151 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 18:51:07.540182 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 18:51:07.540244 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 18:51:07.540276 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Oct 2 18:51:07.540306 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 18:51:07.540336 systemd[1]: Stopped systemd-journald.service. Oct 2 18:51:07.540364 kernel: loop: module loaded Oct 2 18:51:07.540395 systemd[1]: Starting systemd-journald.service... Oct 2 18:51:07.540432 systemd[1]: Starting systemd-modules-load.service... Oct 2 18:51:07.540465 systemd[1]: Starting systemd-network-generator.service... Oct 2 18:51:07.540502 systemd[1]: Starting systemd-remount-fs.service... Oct 2 18:51:07.540532 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 18:51:07.540565 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 18:51:07.540595 systemd[1]: Stopped verity-setup.service. Oct 2 18:51:07.540628 systemd[1]: Mounted dev-hugepages.mount. Oct 2 18:51:07.540660 systemd[1]: Mounted dev-mqueue.mount. Oct 2 18:51:07.540691 systemd[1]: Mounted media.mount. Oct 2 18:51:07.540723 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 18:51:07.540768 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 18:51:07.540814 systemd[1]: Mounted tmp.mount. Oct 2 18:51:07.540845 systemd[1]: Finished kmod-static-nodes.service. Oct 2 18:51:07.540875 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 18:51:07.540907 systemd[1]: Finished modprobe@configfs.service. Oct 2 18:51:07.540938 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 18:51:07.540968 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 18:51:07.541003 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 18:51:07.541037 systemd[1]: Finished modprobe@drm.service. Oct 2 18:51:07.541072 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 18:51:07.541103 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 18:51:07.541135 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 18:51:07.541165 systemd[1]: Finished modprobe@loop.service. Oct 2 18:51:07.541242 kernel: fuse: init (API version 7.34) Oct 2 18:51:07.541281 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 18:51:07.541312 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 18:51:07.541409 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 18:51:07.541442 systemd[1]: Finished modprobe@fuse.service. Oct 2 18:51:07.541473 systemd[1]: Finished systemd-remount-fs.service. Oct 2 18:51:07.541507 systemd[1]: Finished systemd-modules-load.service. Oct 2 18:51:07.541543 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 18:51:07.541573 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 18:51:07.541605 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 18:51:07.541636 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 18:51:07.541669 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 18:51:07.541700 systemd[1]: Starting systemd-random-seed.service... Oct 2 18:51:07.541731 systemd[1]: Starting systemd-sysctl.service... Oct 2 18:51:07.541769 systemd[1]: Finished systemd-network-generator.service. Oct 2 18:51:07.541808 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 18:51:07.541843 systemd[1]: Reached target network-pre.target. Oct 2 18:51:07.541880 systemd-journald[1484]: Journal started Oct 2 18:51:07.542043 systemd-journald[1484]: Runtime Journal (/run/log/journal/ec2dff44910014e7d53ba232516fe5c1) is 8.0M, max 75.4M, 67.4M free. 
Oct 2 18:51:02.222000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 18:51:07.551865 systemd[1]: Started systemd-journald.service. Oct 2 18:51:02.418000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 18:51:02.418000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 18:51:02.418000 audit: BPF prog-id=10 op=LOAD Oct 2 18:51:02.418000 audit: BPF prog-id=10 op=UNLOAD Oct 2 18:51:02.418000 audit: BPF prog-id=11 op=LOAD Oct 2 18:51:02.418000 audit: BPF prog-id=11 op=UNLOAD Oct 2 18:51:06.949000 audit: BPF prog-id=12 op=LOAD Oct 2 18:51:06.949000 audit: BPF prog-id=3 op=UNLOAD Oct 2 18:51:06.951000 audit: BPF prog-id=13 op=LOAD Oct 2 18:51:06.953000 audit: BPF prog-id=14 op=LOAD Oct 2 18:51:06.953000 audit: BPF prog-id=4 op=UNLOAD Oct 2 18:51:06.954000 audit: BPF prog-id=5 op=UNLOAD Oct 2 18:51:06.959000 audit: BPF prog-id=15 op=LOAD Oct 2 18:51:06.959000 audit: BPF prog-id=12 op=UNLOAD Oct 2 18:51:06.961000 audit: BPF prog-id=16 op=LOAD Oct 2 18:51:06.964000 audit: BPF prog-id=17 op=LOAD Oct 2 18:51:06.964000 audit: BPF prog-id=13 op=UNLOAD Oct 2 18:51:06.964000 audit: BPF prog-id=14 op=UNLOAD Oct 2 18:51:06.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:06.982000 audit: BPF prog-id=15 op=UNLOAD Oct 2 18:51:06.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:06.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 18:51:07.284000 audit: BPF prog-id=18 op=LOAD Oct 2 18:51:07.284000 audit: BPF prog-id=19 op=LOAD Oct 2 18:51:07.284000 audit: BPF prog-id=20 op=LOAD Oct 2 18:51:07.284000 audit: BPF prog-id=16 op=UNLOAD Oct 2 18:51:07.284000 audit: BPF prog-id=17 op=UNLOAD Oct 2 18:51:07.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 18:51:07.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.532000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 18:51:07.532000 audit[1484]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc5f82cf0 a2=4000 a3=1 items=0 ppid=1 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:07.532000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 18:51:07.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:06.947097 systemd[1]: Queued start job for default target multi-user.target. Oct 2 18:51:02.615543 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2023-10-02T18:51:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 18:51:06.967137 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 2 18:51:02.625914 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2023-10-02T18:51:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 18:51:07.554564 systemd[1]: Starting systemd-journal-flush.service... 
Oct 2 18:51:02.625969 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2023-10-02T18:51:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 18:51:02.626042 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2023-10-02T18:51:02Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 18:51:02.626068 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2023-10-02T18:51:02Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 18:51:02.626140 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2023-10-02T18:51:02Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 18:51:02.626171 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2023-10-02T18:51:02Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 18:51:02.626645 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2023-10-02T18:51:02Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 18:51:02.626745 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2023-10-02T18:51:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 18:51:02.626782 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2023-10-02T18:51:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 18:51:02.628018 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2023-10-02T18:51:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 18:51:02.628114 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2023-10-02T18:51:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 18:51:02.628163 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2023-10-02T18:51:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 18:51:02.628240 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2023-10-02T18:51:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 18:51:02.628293 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2023-10-02T18:51:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 18:51:02.628333 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2023-10-02T18:51:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 18:51:05.982184 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2023-10-02T18:51:05Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 18:51:05.982894 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2023-10-02T18:51:05Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy 
/bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 18:51:05.983345 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2023-10-02T18:51:05Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 18:51:05.983987 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2023-10-02T18:51:05Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 18:51:05.984143 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2023-10-02T18:51:05Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 18:51:05.984378 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2023-10-02T18:51:05Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 18:51:07.589558 systemd[1]: Finished systemd-random-seed.service. Oct 2 18:51:07.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.591800 systemd[1]: Reached target first-boot-complete.target. Oct 2 18:51:07.598715 systemd-journald[1484]: Time spent on flushing to /var/log/journal/ec2dff44910014e7d53ba232516fe5c1 is 88.481ms for 1140 entries. Oct 2 18:51:07.598715 systemd-journald[1484]: System Journal (/var/log/journal/ec2dff44910014e7d53ba232516fe5c1) is 8.0M, max 195.6M, 187.6M free. Oct 2 18:51:07.725458 systemd-journald[1484]: Received client request to flush runtime journal. Oct 2 18:51:07.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.670871 systemd[1]: Finished systemd-sysctl.service. Oct 2 18:51:07.728327 systemd[1]: Finished systemd-journal-flush.service. Oct 2 18:51:07.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.744041 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 18:51:07.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.748577 systemd[1]: Starting systemd-udev-settle.service... Oct 2 18:51:07.781262 udevadm[1522]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 2 18:51:07.810337 systemd[1]: Finished flatcar-tmpfiles.service. 
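The torcx generator above finishes by sealing its run-time state into /run/metadata/torcx (TORCX_LOWER_PROFILES, TORCX_PROFILE_PATH, TORCX_BINDIR, TORCX_UNPACKDIR). A minimal sketch of reading that state back, assuming the file holds one KEY="value" assignment per line as the sealed-state message suggests:

    # Sketch: parse the torcx sealed state written to /run/metadata/torcx.
    # Assumes env-style KEY="value" lines, matching the log entry above.
    from pathlib import Path

    def read_torcx_state(path="/run/metadata/torcx"):
        state = {}
        for line in Path(path).read_text().splitlines():
            if "=" in line:
                key, _, value = line.partition("=")
                state[key.strip()] = value.strip().strip('"')
        return state

    # On this boot, read_torcx_state().get("TORCX_BINDIR") should be /run/torcx/bin.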
Oct 2 18:51:07.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:07.814838 systemd[1]: Starting systemd-sysusers.service... Oct 2 18:51:07.928150 systemd[1]: Finished systemd-sysusers.service. Oct 2 18:51:07.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:08.621129 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 18:51:08.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:08.625000 audit: BPF prog-id=21 op=LOAD Oct 2 18:51:08.625000 audit: BPF prog-id=22 op=LOAD Oct 2 18:51:08.625000 audit: BPF prog-id=7 op=UNLOAD Oct 2 18:51:08.625000 audit: BPF prog-id=8 op=UNLOAD Oct 2 18:51:08.628136 systemd[1]: Starting systemd-udevd.service... Oct 2 18:51:08.678322 systemd-udevd[1533]: Using default interface naming scheme 'v252'. Oct 2 18:51:08.756552 systemd[1]: Started systemd-udevd.service. Oct 2 18:51:08.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:08.759000 audit: BPF prog-id=23 op=LOAD Oct 2 18:51:08.763589 systemd[1]: Starting systemd-networkd.service... Oct 2 18:51:08.776000 audit: BPF prog-id=24 op=LOAD Oct 2 18:51:08.777000 audit: BPF prog-id=25 op=LOAD Oct 2 18:51:08.777000 audit: BPF prog-id=26 op=LOAD Oct 2 18:51:08.779749 systemd[1]: Starting systemd-userdbd.service... Oct 2 18:51:08.874578 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 18:51:08.925665 systemd[1]: Started systemd-userdbd.service. Oct 2 18:51:08.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:08.979112 (udev-worker)[1546]: Network interface NamePolicy= disabled on kernel command line. Oct 2 18:51:09.132157 systemd-networkd[1539]: lo: Link UP Oct 2 18:51:09.132759 systemd-networkd[1539]: lo: Gained carrier Oct 2 18:51:09.133949 systemd-networkd[1539]: Enumeration completed Oct 2 18:51:09.134311 systemd[1]: Started systemd-networkd.service. Oct 2 18:51:09.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:09.139675 systemd-networkd[1539]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 18:51:09.140044 systemd[1]: Starting systemd-networkd-wait-online.service... 
Oct 2 18:51:09.150242 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 2 18:51:09.150642 systemd-networkd[1539]: eth0: Link UP Oct 2 18:51:09.151238 systemd-networkd[1539]: eth0: Gained carrier Oct 2 18:51:09.172515 systemd-networkd[1539]: eth0: DHCPv4 address 172.31.28.169/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 2 18:51:09.262224 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1537) Oct 2 18:51:09.468944 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 18:51:09.471853 systemd[1]: Finished systemd-udev-settle.service. Oct 2 18:51:09.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:09.476656 systemd[1]: Starting lvm2-activation-early.service... Oct 2 18:51:09.543320 lvm[1652]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 18:51:09.579157 systemd[1]: Finished lvm2-activation-early.service. Oct 2 18:51:09.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:09.581530 systemd[1]: Reached target cryptsetup.target. Oct 2 18:51:09.586032 systemd[1]: Starting lvm2-activation.service... Oct 2 18:51:09.600966 lvm[1653]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 18:51:09.635382 systemd[1]: Finished lvm2-activation.service. Oct 2 18:51:09.637604 systemd[1]: Reached target local-fs-pre.target. Oct 2 18:51:09.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:09.639524 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 18:51:09.639587 systemd[1]: Reached target local-fs.target. Oct 2 18:51:09.641464 systemd[1]: Reached target machines.target. Oct 2 18:51:09.645548 systemd[1]: Starting ldconfig.service... Oct 2 18:51:09.653122 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 18:51:09.653292 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 18:51:09.655810 systemd[1]: Starting systemd-boot-update.service... Oct 2 18:51:09.660066 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 18:51:09.665689 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 18:51:09.668171 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 18:51:09.668326 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 18:51:09.671828 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 18:51:09.707014 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1655 (bootctl) Oct 2 18:51:09.710273 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 18:51:09.743751 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
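systemd-networkd above brings up eth0 with a DHCPv4 lease of 172.31.28.169/20 from 172.31.16.1. A stdlib-only, Linux-specific sketch for cross-checking which IPv4 address is actually bound to the interface (the helper name is ours, not from the log):

    # Sketch: read an interface's IPv4 address via the SIOCGIFADDR ioctl
    # (Linux-only), to cross-check the DHCP lease reported above.
    import fcntl
    import socket
    import struct

    SIOCGIFADDR = 0x8915  # from <linux/sockios.h>

    def ipv4_address(ifname: str) -> str:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            req = struct.pack("256s", ifname.encode()[:15])
            resp = fcntl.ioctl(s.fileno(), SIOCGIFADDR, req)
        return socket.inet_ntoa(resp[20:24])

    print(ipv4_address("eth0"))  # expected to print 172.31.28.169 on this host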
Oct 2 18:51:09.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:09.802834 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 18:51:09.807650 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 18:51:09.839049 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 18:51:09.901008 systemd-fsck[1664]: fsck.fat 4.2 (2021-01-31) Oct 2 18:51:09.901008 systemd-fsck[1664]: /dev/nvme0n1p1: 236 files, 113463/258078 clusters Oct 2 18:51:09.907330 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 18:51:09.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:09.912099 systemd[1]: Mounting boot.mount... Oct 2 18:51:09.954833 systemd[1]: Mounted boot.mount. Oct 2 18:51:09.984106 systemd[1]: Finished systemd-boot-update.service. Oct 2 18:51:09.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:10.149869 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 18:51:10.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:10.155113 systemd[1]: Starting audit-rules.service... Oct 2 18:51:10.159649 systemd[1]: Starting clean-ca-certificates.service... Oct 2 18:51:10.167693 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 18:51:10.172000 audit: BPF prog-id=27 op=LOAD Oct 2 18:51:10.179000 audit: BPF prog-id=28 op=LOAD Oct 2 18:51:10.176905 systemd[1]: Starting systemd-resolved.service... Oct 2 18:51:10.184304 systemd[1]: Starting systemd-timesyncd.service... Oct 2 18:51:10.188352 systemd[1]: Starting systemd-update-utmp.service... Oct 2 18:51:10.218140 systemd[1]: Finished clean-ca-certificates.service. Oct 2 18:51:10.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:10.220507 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 18:51:10.231000 audit[1684]: SYSTEM_BOOT pid=1684 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 18:51:10.237903 systemd[1]: Finished systemd-update-utmp.service. Oct 2 18:51:10.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 18:51:10.289692 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 18:51:10.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:10.381295 systemd[1]: Started systemd-timesyncd.service. Oct 2 18:51:10.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:10.383553 systemd[1]: Reached target time-set.target. Oct 2 18:51:10.427007 systemd-resolved[1682]: Positive Trust Anchors: Oct 2 18:51:10.427708 systemd-resolved[1682]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 18:51:10.427903 systemd-resolved[1682]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 18:51:10.488000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 18:51:10.488000 audit[1699]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffda93c2d0 a2=420 a3=0 items=0 ppid=1678 pid=1699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:10.488000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 18:51:10.490303 augenrules[1699]: No rules Oct 2 18:51:10.492465 systemd[1]: Finished audit-rules.service. Oct 2 18:51:10.513810 systemd-resolved[1682]: Defaulting to hostname 'linux'. Oct 2 18:51:10.518003 systemd[1]: Started systemd-resolved.service. Oct 2 18:51:10.520552 systemd[1]: Reached target network.target. Oct 2 18:51:10.522416 systemd[1]: Reached target nss-lookup.target. Oct 2 18:51:10.564616 systemd-timesyncd[1683]: Contacted time server 151.204.223.236:123 (0.flatcar.pool.ntp.org). Oct 2 18:51:10.564855 systemd-timesyncd[1683]: Initial clock synchronization to Mon 2023-10-02 18:51:10.608507 UTC. Oct 2 18:51:11.163482 systemd-networkd[1539]: eth0: Gained IPv6LL Oct 2 18:51:11.168489 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 18:51:11.170909 systemd[1]: Reached target network-online.target. Oct 2 18:51:11.439370 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 18:51:11.442876 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 18:51:11.541372 ldconfig[1654]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 18:51:11.548556 systemd[1]: Finished ldconfig.service. Oct 2 18:51:11.553272 systemd[1]: Starting systemd-update-done.service... Oct 2 18:51:11.577035 systemd[1]: Finished systemd-update-done.service. Oct 2 18:51:11.579459 systemd[1]: Reached target sysinit.target. Oct 2 18:51:11.581765 systemd[1]: Started motdgen.path. 
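systemd-resolved above settles on the root DNSSEC trust anchor and a fallback hostname, and systemd-timesyncd synchronizes against 0.flatcar.pool.ntp.org. Both daemons ship CLIs for inspecting that state; a small sketch, assuming resolvectl and timedatectl are on PATH:

    # Sketch: show the resolver and NTP status that systemd-resolved and
    # systemd-timesyncd report in the entries above.
    import subprocess

    for cmd in (["resolvectl", "status"], ["timedatectl", "timesync-status"]):
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=False)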
Oct 2 18:51:11.583635 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 18:51:11.586805 systemd[1]: Started logrotate.timer. Oct 2 18:51:11.588794 systemd[1]: Started mdadm.timer. Oct 2 18:51:11.590365 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 18:51:11.592418 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 18:51:11.592488 systemd[1]: Reached target paths.target. Oct 2 18:51:11.594158 systemd[1]: Reached target timers.target. Oct 2 18:51:11.596883 systemd[1]: Listening on dbus.socket. Oct 2 18:51:11.600755 systemd[1]: Starting docker.socket... Oct 2 18:51:11.609878 systemd[1]: Listening on sshd.socket. Oct 2 18:51:11.611833 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 18:51:11.612842 systemd[1]: Listening on docker.socket. Oct 2 18:51:11.614877 systemd[1]: Reached target sockets.target. Oct 2 18:51:11.616673 systemd[1]: Reached target basic.target. Oct 2 18:51:11.618552 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 18:51:11.618626 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 18:51:11.621469 systemd[1]: Started amazon-ssm-agent.service. Oct 2 18:51:11.626243 systemd[1]: Starting containerd.service... Oct 2 18:51:11.631087 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Oct 2 18:51:11.637027 systemd[1]: Starting dbus.service... Oct 2 18:51:11.646970 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 18:51:11.653853 systemd[1]: Starting extend-filesystems.service... Oct 2 18:51:11.655701 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 18:51:11.666059 systemd[1]: Starting motdgen.service... Oct 2 18:51:11.671299 systemd[1]: Started nvidia.service. Oct 2 18:51:11.675333 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 18:51:11.679712 systemd[1]: Starting prepare-critools.service... Oct 2 18:51:11.683860 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 18:51:11.690981 systemd[1]: Starting sshd-keygen.service... Oct 2 18:51:11.697209 systemd[1]: Starting systemd-logind.service... Oct 2 18:51:11.699389 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 18:51:11.761573 jq[1727]: true Oct 2 18:51:11.699538 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 18:51:11.700510 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 18:51:11.704765 systemd[1]: Starting update-engine.service... Oct 2 18:51:11.710617 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 18:51:11.834093 jq[1717]: false Oct 2 18:51:11.839354 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 18:51:11.839737 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 18:51:11.846690 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
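At this point systemd is only listening on docker.socket and sshd.socket; dockerd itself is started lazily when the first client connects. A sketch of such a connection over the default /run/docker.sock path (root or docker-group access assumed), using the Docker API /_ping endpoint:

    # Sketch: poke the socket-activated Docker API over /run/docker.sock.
    # The first connection is what triggers systemd to start docker.service.
    import socket

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/run/docker.sock")
        s.sendall(b"GET /_ping HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")
        print(s.recv(4096).decode(errors="replace"))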
Oct 2 18:51:11.847107 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 18:51:11.939159 jq[1733]: true Oct 2 18:51:11.946369 tar[1729]: ./ Oct 2 18:51:11.946369 tar[1729]: ./macvlan Oct 2 18:51:11.954537 tar[1731]: crictl Oct 2 18:51:12.001597 dbus-daemon[1716]: [system] SELinux support is enabled Oct 2 18:51:12.001901 systemd[1]: Started dbus.service. Oct 2 18:51:12.007221 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 18:51:12.007270 systemd[1]: Reached target system-config.target. Oct 2 18:51:12.009572 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 18:51:12.009613 systemd[1]: Reached target user-config.target. Oct 2 18:51:12.019679 dbus-daemon[1716]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1539 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 2 18:51:12.032514 extend-filesystems[1718]: Found nvme0n1 Oct 2 18:51:12.034785 extend-filesystems[1718]: Found nvme0n1p1 Oct 2 18:51:12.034785 extend-filesystems[1718]: Found nvme0n1p2 Oct 2 18:51:12.034785 extend-filesystems[1718]: Found nvme0n1p3 Oct 2 18:51:12.034785 extend-filesystems[1718]: Found usr Oct 2 18:51:12.034785 extend-filesystems[1718]: Found nvme0n1p4 Oct 2 18:51:12.034785 extend-filesystems[1718]: Found nvme0n1p6 Oct 2 18:51:12.034785 extend-filesystems[1718]: Found nvme0n1p7 Oct 2 18:51:12.034785 extend-filesystems[1718]: Found nvme0n1p9 Oct 2 18:51:12.034785 extend-filesystems[1718]: Checking size of /dev/nvme0n1p9 Oct 2 18:51:12.049960 dbus-daemon[1716]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 2 18:51:12.074863 systemd[1]: Starting systemd-hostnamed.service... Oct 2 18:51:12.179391 extend-filesystems[1718]: Resized partition /dev/nvme0n1p9 Oct 2 18:51:12.216270 extend-filesystems[1779]: resize2fs 1.46.5 (30-Dec-2021) Oct 2 18:51:12.229427 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 18:51:12.229783 systemd[1]: Finished motdgen.service. Oct 2 18:51:12.243558 update_engine[1726]: I1002 18:51:12.235692 1726 main.cc:92] Flatcar Update Engine starting Oct 2 18:51:12.252252 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Oct 2 18:51:12.267385 amazon-ssm-agent[1713]: 2023/10/02 18:51:12 Failed to load instance info from vault. RegistrationKey does not exist. Oct 2 18:51:12.280954 systemd[1]: Started update-engine.service. Oct 2 18:51:12.282757 update_engine[1726]: I1002 18:51:12.282717 1726 update_check_scheduler.cc:74] Next update check in 8m31s Oct 2 18:51:12.285951 systemd[1]: Started locksmithd.service. Oct 2 18:51:12.298616 amazon-ssm-agent[1713]: Initializing new seelog logger Oct 2 18:51:12.301965 amazon-ssm-agent[1713]: New Seelog Logger Creation Complete Oct 2 18:51:12.303407 amazon-ssm-agent[1713]: 2023/10/02 18:51:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 2 18:51:12.303407 amazon-ssm-agent[1713]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Oct 2 18:51:12.315553 amazon-ssm-agent[1713]: 2023/10/02 18:51:12 processing appconfig overrides Oct 2 18:51:12.330916 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Oct 2 18:51:12.359304 extend-filesystems[1779]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Oct 2 18:51:12.359304 extend-filesystems[1779]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 2 18:51:12.359304 extend-filesystems[1779]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Oct 2 18:51:12.367323 extend-filesystems[1718]: Resized filesystem in /dev/nvme0n1p9 Oct 2 18:51:12.380357 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 18:51:12.380732 systemd[1]: Finished extend-filesystems.service. Oct 2 18:51:12.386787 bash[1792]: Updated "/home/core/.ssh/authorized_keys" Oct 2 18:51:12.388774 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 18:51:12.432623 systemd-logind[1725]: Watching system buttons on /dev/input/event0 (Power Button) Oct 2 18:51:12.439933 systemd-logind[1725]: New seat seat0. Oct 2 18:51:12.466649 systemd[1]: Started systemd-logind.service. Oct 2 18:51:12.471055 env[1730]: time="2023-10-02T18:51:12.470971721Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 18:51:12.499156 tar[1729]: ./static Oct 2 18:51:12.545461 systemd[1]: nvidia.service: Deactivated successfully. Oct 2 18:51:12.563706 dbus-daemon[1716]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 2 18:51:12.563957 systemd[1]: Started systemd-hostnamed.service. Oct 2 18:51:12.574656 dbus-daemon[1716]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1760 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 2 18:51:12.579450 systemd[1]: Starting polkit.service... Oct 2 18:51:12.677638 env[1730]: time="2023-10-02T18:51:12.677550141Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 18:51:12.677942 env[1730]: time="2023-10-02T18:51:12.677819443Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 18:51:12.684319 polkitd[1805]: Started polkitd version 121 Oct 2 18:51:12.697901 env[1730]: time="2023-10-02T18:51:12.697817974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 18:51:12.697901 env[1730]: time="2023-10-02T18:51:12.697890581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 18:51:12.698908 env[1730]: time="2023-10-02T18:51:12.698843437Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 18:51:12.699030 env[1730]: time="2023-10-02T18:51:12.698903989Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Oct 2 18:51:12.699030 env[1730]: time="2023-10-02T18:51:12.698939396Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 18:51:12.699030 env[1730]: time="2023-10-02T18:51:12.698964589Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 18:51:12.699410 env[1730]: time="2023-10-02T18:51:12.699160057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 18:51:12.701154 env[1730]: time="2023-10-02T18:51:12.701086027Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 18:51:12.701467 env[1730]: time="2023-10-02T18:51:12.701411875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 18:51:12.701556 env[1730]: time="2023-10-02T18:51:12.701466075Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 18:51:12.701658 env[1730]: time="2023-10-02T18:51:12.701612516Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 18:51:12.701850 env[1730]: time="2023-10-02T18:51:12.701654636Z" level=info msg="metadata content store policy set" policy=shared Oct 2 18:51:12.713612 env[1730]: time="2023-10-02T18:51:12.713542059Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 18:51:12.713767 env[1730]: time="2023-10-02T18:51:12.713619888Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 18:51:12.713767 env[1730]: time="2023-10-02T18:51:12.713657015Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 18:51:12.713767 env[1730]: time="2023-10-02T18:51:12.713728744Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 18:51:12.713957 env[1730]: time="2023-10-02T18:51:12.713765920Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 18:51:12.713957 env[1730]: time="2023-10-02T18:51:12.713808714Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 18:51:12.713957 env[1730]: time="2023-10-02T18:51:12.713841174Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 18:51:12.714492 env[1730]: time="2023-10-02T18:51:12.714436348Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 18:51:12.714590 env[1730]: time="2023-10-02T18:51:12.714499053Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 18:51:12.714590 env[1730]: time="2023-10-02T18:51:12.714533606Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 18:51:12.714590 env[1730]: time="2023-10-02T18:51:12.714570950Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Oct 2 18:51:12.714759 env[1730]: time="2023-10-02T18:51:12.714603747Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 18:51:12.714999 env[1730]: time="2023-10-02T18:51:12.714826441Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 18:51:12.715084 env[1730]: time="2023-10-02T18:51:12.715019550Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 18:51:12.715686 env[1730]: time="2023-10-02T18:51:12.715634611Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 18:51:12.715779 env[1730]: time="2023-10-02T18:51:12.715702719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 18:51:12.715779 env[1730]: time="2023-10-02T18:51:12.715736634Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 18:51:12.715933 env[1730]: time="2023-10-02T18:51:12.715909279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 18:51:12.715996 env[1730]: time="2023-10-02T18:51:12.715944049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 18:51:12.715996 env[1730]: time="2023-10-02T18:51:12.715974764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 18:51:12.716095 env[1730]: time="2023-10-02T18:51:12.716003891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 18:51:12.716095 env[1730]: time="2023-10-02T18:51:12.716036483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 18:51:12.716095 env[1730]: time="2023-10-02T18:51:12.716066572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 18:51:12.716574 env[1730]: time="2023-10-02T18:51:12.716095940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 18:51:12.716574 env[1730]: time="2023-10-02T18:51:12.716129206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 18:51:12.716574 env[1730]: time="2023-10-02T18:51:12.716163470Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 18:51:12.716574 env[1730]: time="2023-10-02T18:51:12.716478189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 18:51:12.716574 env[1730]: time="2023-10-02T18:51:12.716516050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 18:51:12.716574 env[1730]: time="2023-10-02T18:51:12.716547331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 18:51:12.716879 env[1730]: time="2023-10-02T18:51:12.716576374Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 18:51:12.716879 env[1730]: time="2023-10-02T18:51:12.716609977Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 18:51:12.716879 env[1730]: time="2023-10-02T18:51:12.716636421Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 18:51:12.716879 env[1730]: time="2023-10-02T18:51:12.716670444Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 18:51:12.716879 env[1730]: time="2023-10-02T18:51:12.716734558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 18:51:12.717271 env[1730]: time="2023-10-02T18:51:12.717127081Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 18:51:12.718716 env[1730]: time="2023-10-02T18:51:12.717274280Z" level=info msg="Connect containerd service" Oct 2 18:51:12.718716 env[1730]: time="2023-10-02T18:51:12.717494989Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 18:51:12.722991 env[1730]: time="2023-10-02T18:51:12.722367159Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 18:51:12.724609 env[1730]: time="2023-10-02T18:51:12.724516089Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Oct 2 18:51:12.724736 env[1730]: time="2023-10-02T18:51:12.724645747Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 18:51:12.725138 systemd[1]: Started containerd.service. Oct 2 18:51:12.733336 env[1730]: time="2023-10-02T18:51:12.733269809Z" level=info msg="containerd successfully booted in 0.263691s" Oct 2 18:51:12.738447 env[1730]: time="2023-10-02T18:51:12.738357611Z" level=info msg="Start subscribing containerd event" Oct 2 18:51:12.738629 env[1730]: time="2023-10-02T18:51:12.738463905Z" level=info msg="Start recovering state" Oct 2 18:51:12.738629 env[1730]: time="2023-10-02T18:51:12.738582855Z" level=info msg="Start event monitor" Oct 2 18:51:12.740020 env[1730]: time="2023-10-02T18:51:12.739949865Z" level=info msg="Start snapshots syncer" Oct 2 18:51:12.740020 env[1730]: time="2023-10-02T18:51:12.740009046Z" level=info msg="Start cni network conf syncer for default" Oct 2 18:51:12.740254 env[1730]: time="2023-10-02T18:51:12.740034492Z" level=info msg="Start streaming server" Oct 2 18:51:12.761638 polkitd[1805]: Loading rules from directory /etc/polkit-1/rules.d Oct 2 18:51:12.772347 tar[1729]: ./vlan Oct 2 18:51:12.772708 polkitd[1805]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 2 18:51:12.778853 polkitd[1805]: Finished loading, compiling and executing 2 rules Oct 2 18:51:12.782842 dbus-daemon[1716]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 2 18:51:12.783131 systemd[1]: Started polkit.service. Oct 2 18:51:12.789801 polkitd[1805]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 2 18:51:12.862869 systemd-hostnamed[1760]: Hostname set to (transient) Oct 2 18:51:12.863186 systemd-resolved[1682]: System hostname changed to 'ip-172-31-28-169'. Oct 2 18:51:12.926839 amazon-ssm-agent[1713]: 2023-10-02 18:51:12 INFO Entering SSM Agent hibernate - AccessDeniedException: User: arn:aws:sts::075585003325:assumed-role/jenkins-test/i-061baf47f2b45a445 is not authorized to perform: ssm:UpdateInstanceInformation on resource: arn:aws:ec2:us-west-2:075585003325:instance/i-061baf47f2b45a445 because no identity-based policy allows the ssm:UpdateInstanceInformation action Oct 2 18:51:12.926839 amazon-ssm-agent[1713]: status code: 400, request id: 3c46e653-b24e-4037-a100-107df02bba63 Oct 2 18:51:12.927266 amazon-ssm-agent[1713]: 2023-10-02 18:51:12 INFO Agent is in hibernate mode. Reducing logging. Logging will be reduced to one log per backoff period Oct 2 18:51:12.990867 tar[1729]: ./portmap Oct 2 18:51:13.061095 coreos-metadata[1715]: Oct 02 18:51:13.060 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 2 18:51:13.065350 coreos-metadata[1715]: Oct 02 18:51:13.065 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Oct 2 18:51:13.068973 coreos-metadata[1715]: Oct 02 18:51:13.068 INFO Fetch successful Oct 2 18:51:13.069319 coreos-metadata[1715]: Oct 02 18:51:13.069 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 2 18:51:13.071289 coreos-metadata[1715]: Oct 02 18:51:13.071 INFO Fetch successful Oct 2 18:51:13.074441 unknown[1715]: wrote ssh authorized keys file for user: core Oct 2 18:51:13.109739 update-ssh-keys[1867]: Updated "/home/core/.ssh/authorized_keys" Oct 2 18:51:13.110890 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
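containerd above reports serving on /run/containerd/containerd.sock and booting in roughly 0.26 s, with the CRI plugin configured for the runc runtime (SystemdCgroup:true) and the overlayfs snapshotter. One hedged way to enumerate the plugins it loaded is the bundled ctr client:

    # Sketch: list containerd's loaded plugins over the socket the log shows
    # it serving on; ctr ships alongside containerd (root access assumed).
    import subprocess

    subprocess.run(
        ["ctr", "--address", "/run/containerd/containerd.sock", "plugins", "ls"],
        check=True,
    )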
Oct 2 18:51:13.141780 tar[1729]: ./host-local Oct 2 18:51:13.238398 tar[1729]: ./vrf Oct 2 18:51:13.350148 tar[1729]: ./bridge Oct 2 18:51:13.506986 tar[1729]: ./tuning Oct 2 18:51:13.652228 tar[1729]: ./firewall Oct 2 18:51:13.681831 systemd[1]: Finished prepare-critools.service. Oct 2 18:51:13.747983 tar[1729]: ./host-device Oct 2 18:51:13.806935 tar[1729]: ./sbr Oct 2 18:51:13.859226 tar[1729]: ./loopback Oct 2 18:51:13.910290 tar[1729]: ./dhcp Oct 2 18:51:14.053422 tar[1729]: ./ptp Oct 2 18:51:14.115624 tar[1729]: ./ipvlan Oct 2 18:51:14.176666 tar[1729]: ./bandwidth Oct 2 18:51:14.262611 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 18:51:14.389645 locksmithd[1787]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 18:51:17.499549 sshd_keygen[1753]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 18:51:17.560061 systemd[1]: Finished sshd-keygen.service. Oct 2 18:51:17.564992 systemd[1]: Starting issuegen.service... Oct 2 18:51:17.587287 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 18:51:17.587646 systemd[1]: Finished issuegen.service. Oct 2 18:51:17.592480 systemd[1]: Starting systemd-user-sessions.service... Oct 2 18:51:17.615812 systemd[1]: Finished systemd-user-sessions.service. Oct 2 18:51:17.620818 systemd[1]: Started getty@tty1.service. Oct 2 18:51:17.626428 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 18:51:17.628847 systemd[1]: Reached target getty.target. Oct 2 18:51:17.630793 systemd[1]: Reached target multi-user.target. Oct 2 18:51:17.635737 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 18:51:17.659861 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 18:51:17.660235 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 18:51:17.662579 systemd[1]: Startup finished in 1.191s (kernel) + 12.475s (initrd) + 15.583s (userspace) = 29.250s. Oct 2 18:51:20.501933 systemd[1]: Created slice system-sshd.slice. Oct 2 18:51:20.504425 systemd[1]: Started sshd@0-172.31.28.169:22-139.178.89.65:58414.service. Oct 2 18:51:20.711059 sshd[1925]: Accepted publickey for core from 139.178.89.65 port 58414 ssh2: RSA SHA256:ePkK8jKoGlhN3AxcTQ2G+RQZHD5kDZhw675IJmRySH8 Oct 2 18:51:20.717270 sshd[1925]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 18:51:20.739655 systemd[1]: Created slice user-500.slice. Oct 2 18:51:20.743996 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 18:51:20.751074 systemd-logind[1725]: New session 1 of user core. Oct 2 18:51:20.775187 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 18:51:20.779144 systemd[1]: Starting user@500.service... Oct 2 18:51:20.791153 (systemd)[1928]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 18:51:21.014848 systemd[1928]: Queued start job for default target default.target. Oct 2 18:51:21.016160 systemd[1928]: Reached target paths.target. Oct 2 18:51:21.016259 systemd[1928]: Reached target sockets.target. Oct 2 18:51:21.016298 systemd[1928]: Reached target timers.target. Oct 2 18:51:21.016329 systemd[1928]: Reached target basic.target. Oct 2 18:51:21.016446 systemd[1928]: Reached target default.target. Oct 2 18:51:21.016523 systemd[1928]: Startup finished in 205ms. Oct 2 18:51:21.017681 systemd[1]: Started user@500.service. Oct 2 18:51:21.021650 systemd[1]: Started session-1.scope. Oct 2 18:51:21.190842 systemd[1]: Started sshd@1-172.31.28.169:22-139.178.89.65:58422.service. 
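prepare-cni-plugins above unpacks the reference CNI plugins (bridge, host-local, portmap and friends) while containerd noted earlier that /etc/cni/net.d still holds no network config. Purely as an illustration (the file name, network name and subnet below are invented, not taken from this host), a minimal bridge + host-local conflist of the kind the CRI plugin looks for could be written like this:

    # Illustration only: write a minimal CNI conflist into /etc/cni/net.d.
    # All names and the subnet here are made up for the example.
    import json
    from pathlib import Path

    conflist = {
        "cniVersion": "0.4.0",
        "name": "examplenet",
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {"type": "host-local", "subnet": "10.88.0.0/16"},
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    Path("/etc/cni/net.d/10-examplenet.conflist").write_text(json.dumps(conflist, indent=2))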
Oct 2 18:51:21.369471 sshd[1937]: Accepted publickey for core from 139.178.89.65 port 58422 ssh2: RSA SHA256:ePkK8jKoGlhN3AxcTQ2G+RQZHD5kDZhw675IJmRySH8 Oct 2 18:51:21.373096 sshd[1937]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 18:51:21.383310 systemd-logind[1725]: New session 2 of user core. Oct 2 18:51:21.383950 systemd[1]: Started session-2.scope. Oct 2 18:51:21.535821 sshd[1937]: pam_unix(sshd:session): session closed for user core Oct 2 18:51:21.542331 systemd[1]: sshd@1-172.31.28.169:22-139.178.89.65:58422.service: Deactivated successfully. Oct 2 18:51:21.543594 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 18:51:21.544776 systemd-logind[1725]: Session 2 logged out. Waiting for processes to exit. Oct 2 18:51:21.546462 systemd-logind[1725]: Removed session 2. Oct 2 18:51:21.565369 systemd[1]: Started sshd@2-172.31.28.169:22-139.178.89.65:58434.service. Oct 2 18:51:21.740051 sshd[1943]: Accepted publickey for core from 139.178.89.65 port 58434 ssh2: RSA SHA256:ePkK8jKoGlhN3AxcTQ2G+RQZHD5kDZhw675IJmRySH8 Oct 2 18:51:21.743402 sshd[1943]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 18:51:21.751912 systemd-logind[1725]: New session 3 of user core. Oct 2 18:51:21.752917 systemd[1]: Started session-3.scope. Oct 2 18:51:21.887277 sshd[1943]: pam_unix(sshd:session): session closed for user core Oct 2 18:51:21.894699 systemd[1]: sshd@2-172.31.28.169:22-139.178.89.65:58434.service: Deactivated successfully. Oct 2 18:51:21.896150 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 18:51:21.897733 systemd-logind[1725]: Session 3 logged out. Waiting for processes to exit. Oct 2 18:51:21.899438 systemd-logind[1725]: Removed session 3. Oct 2 18:51:21.920049 systemd[1]: Started sshd@3-172.31.28.169:22-139.178.89.65:58446.service. Oct 2 18:51:22.097222 sshd[1949]: Accepted publickey for core from 139.178.89.65 port 58446 ssh2: RSA SHA256:ePkK8jKoGlhN3AxcTQ2G+RQZHD5kDZhw675IJmRySH8 Oct 2 18:51:22.101312 sshd[1949]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 18:51:22.110505 systemd-logind[1725]: New session 4 of user core. Oct 2 18:51:22.111668 systemd[1]: Started session-4.scope. Oct 2 18:51:22.263905 sshd[1949]: pam_unix(sshd:session): session closed for user core Oct 2 18:51:22.271471 systemd[1]: sshd@3-172.31.28.169:22-139.178.89.65:58446.service: Deactivated successfully. Oct 2 18:51:22.272887 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 18:51:22.274336 systemd-logind[1725]: Session 4 logged out. Waiting for processes to exit. Oct 2 18:51:22.276921 systemd-logind[1725]: Removed session 4. Oct 2 18:51:22.296498 systemd[1]: Started sshd@4-172.31.28.169:22-139.178.89.65:58462.service. Oct 2 18:51:22.482335 sshd[1955]: Accepted publickey for core from 139.178.89.65 port 58462 ssh2: RSA SHA256:ePkK8jKoGlhN3AxcTQ2G+RQZHD5kDZhw675IJmRySH8 Oct 2 18:51:22.486340 sshd[1955]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 18:51:22.495462 systemd-logind[1725]: New session 5 of user core. Oct 2 18:51:22.495768 systemd[1]: Started session-5.scope. 
Oct 2 18:51:22.627981 sudo[1958]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 18:51:22.628515 sudo[1958]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 18:51:22.643487 dbus-daemon[1716]: avc: received setenforce notice (enforcing=1) Oct 2 18:51:22.647104 sudo[1958]: pam_unix(sudo:session): session closed for user root Oct 2 18:51:22.672705 sshd[1955]: pam_unix(sshd:session): session closed for user core Oct 2 18:51:22.678511 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 18:51:22.680328 systemd-logind[1725]: Session 5 logged out. Waiting for processes to exit. Oct 2 18:51:22.680366 systemd[1]: sshd@4-172.31.28.169:22-139.178.89.65:58462.service: Deactivated successfully. Oct 2 18:51:22.683894 systemd-logind[1725]: Removed session 5. Oct 2 18:51:22.703570 systemd[1]: Started sshd@5-172.31.28.169:22-139.178.89.65:58464.service. Oct 2 18:51:22.884553 sshd[1962]: Accepted publickey for core from 139.178.89.65 port 58464 ssh2: RSA SHA256:ePkK8jKoGlhN3AxcTQ2G+RQZHD5kDZhw675IJmRySH8 Oct 2 18:51:22.887742 sshd[1962]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 18:51:22.896398 systemd-logind[1725]: New session 6 of user core. Oct 2 18:51:22.897324 systemd[1]: Started session-6.scope. Oct 2 18:51:23.019671 sudo[1966]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 18:51:23.020243 sudo[1966]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 18:51:23.028321 sudo[1966]: pam_unix(sudo:session): session closed for user root Oct 2 18:51:23.043031 sudo[1965]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 18:51:23.043618 sudo[1965]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 18:51:23.069976 systemd[1]: Stopping audit-rules.service... Oct 2 18:51:23.073000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 18:51:23.076787 kernel: kauditd_printk_skb: 71 callbacks suppressed Oct 2 18:51:23.076883 kernel: audit: type=1305 audit(1696272683.073:159): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 18:51:23.077493 auditctl[1969]: No rules Oct 2 18:51:23.083140 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 18:51:23.073000 audit[1969]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcc11fd20 a2=420 a3=0 items=0 ppid=1 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:23.095693 kernel: audit: type=1300 audit(1696272683.073:159): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcc11fd20 a2=420 a3=0 items=0 ppid=1 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:23.083616 systemd[1]: Stopped audit-rules.service. Oct 2 18:51:23.086907 systemd[1]: Starting audit-rules.service... 
Oct 2 18:51:23.073000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 18:51:23.100948 kernel: audit: type=1327 audit(1696272683.073:159): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 18:51:23.101039 kernel: audit: type=1131 audit(1696272683.082:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:23.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:23.160663 augenrules[1986]: No rules Oct 2 18:51:23.162712 systemd[1]: Finished audit-rules.service. Oct 2 18:51:23.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:23.165412 sudo[1965]: pam_unix(sudo:session): session closed for user root Oct 2 18:51:23.163000 audit[1965]: USER_END pid=1965 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 18:51:23.183928 kernel: audit: type=1130 audit(1696272683.161:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:23.184089 kernel: audit: type=1106 audit(1696272683.163:162): pid=1965 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 18:51:23.184137 kernel: audit: type=1104 audit(1696272683.163:163): pid=1965 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 18:51:23.163000 audit[1965]: CRED_DISP pid=1965 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 18:51:23.196063 sshd[1962]: pam_unix(sshd:session): session closed for user core Oct 2 18:51:23.196000 audit[1962]: USER_END pid=1962 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 18:51:23.197000 audit[1962]: CRED_DISP pid=1962 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 18:51:23.216286 kernel: audit: type=1106 audit(1696272683.196:164): pid=1962 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 18:51:23.216402 kernel: audit: type=1104 audit(1696272683.197:165): pid=1962 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 18:51:23.214162 systemd-logind[1725]: Session 6 logged out. Waiting for processes to exit. Oct 2 18:51:23.214653 systemd[1]: sshd@5-172.31.28.169:22-139.178.89.65:58464.service: Deactivated successfully. Oct 2 18:51:23.216091 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 18:51:23.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.28.169:22-139.178.89.65:58464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:23.236175 kernel: audit: type=1131 audit(1696272683.213:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.28.169:22-139.178.89.65:58464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:23.236692 systemd-logind[1725]: Removed session 6. Oct 2 18:51:23.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.28.169:22-139.178.89.65:58468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:23.240868 systemd[1]: Started sshd@6-172.31.28.169:22-139.178.89.65:58468.service. 
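The audit-rules restart above ends with augenrules reporting "No rules", consistent with the default rule files having been removed via sudo a few lines earlier. A short sketch for confirming what the kernel currently has loaded (root, or CAP_AUDIT_CONTROL, assumed):

    # Sketch: list the audit rules currently loaded in the kernel, matching
    # the "No rules" result augenrules reported above.
    import subprocess

    subprocess.run(["auditctl", "-l"], check=True)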
Oct 2 18:51:23.418000 audit[1992]: USER_ACCT pid=1992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 18:51:23.420331 sshd[1992]: Accepted publickey for core from 139.178.89.65 port 58468 ssh2: RSA SHA256:ePkK8jKoGlhN3AxcTQ2G+RQZHD5kDZhw675IJmRySH8 Oct 2 18:51:23.422000 audit[1992]: CRED_ACQ pid=1992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 18:51:23.423000 audit[1992]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe700ef60 a2=3 a3=1 items=0 ppid=1 pid=1992 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:23.423000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 18:51:23.424656 sshd[1992]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 18:51:23.435690 systemd-logind[1725]: New session 7 of user core. Oct 2 18:51:23.435891 systemd[1]: Started session-7.scope. Oct 2 18:51:23.448000 audit[1992]: USER_START pid=1992 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 18:51:23.455000 audit[1994]: CRED_ACQ pid=1994 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 18:51:23.560000 audit[1995]: USER_ACCT pid=1995 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 18:51:23.561685 sudo[1995]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 18:51:23.560000 audit[1995]: CRED_REFR pid=1995 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 18:51:23.562258 sudo[1995]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 18:51:23.565000 audit[1995]: USER_START pid=1995 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 18:51:24.270816 systemd[1]: Reloading. 
Oct 2 18:51:24.481384 /usr/lib/systemd/system-generators/torcx-generator[2027]: time="2023-10-02T18:51:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 18:51:24.488356 /usr/lib/systemd/system-generators/torcx-generator[2027]: time="2023-10-02T18:51:24Z" level=info msg="torcx already run" Oct 2 18:51:24.741429 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 18:51:24.741470 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 18:51:24.780657 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 18:51:24.936000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.936000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.936000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.936000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.936000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.936000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.936000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.936000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.936000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.937000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.937000 audit: BPF prog-id=37 op=LOAD Oct 2 18:51:24.937000 audit: BPF prog-id=23 op=UNLOAD Oct 2 18:51:24.943000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.943000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.943000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.943000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.943000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.943000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.943000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.943000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.943000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.944000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.944000 audit: BPF prog-id=38 op=LOAD Oct 2 18:51:24.944000 audit: BPF prog-id=29 op=UNLOAD Oct 2 18:51:24.944000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.944000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.944000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.944000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.944000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.944000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.944000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.944000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 18:51:24.944000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.944000 audit: BPF prog-id=39 op=LOAD Oct 2 18:51:24.944000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.944000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.944000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.944000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.944000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.944000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.944000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.944000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.944000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.944000 audit: BPF prog-id=40 op=LOAD Oct 2 18:51:24.944000 audit: BPF prog-id=30 op=UNLOAD Oct 2 18:51:24.944000 audit: BPF prog-id=31 op=UNLOAD Oct 2 18:51:24.949000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.949000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.949000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.949000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.949000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.949000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
18:51:24.949000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.949000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.949000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.950000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.950000 audit: BPF prog-id=41 op=LOAD Oct 2 18:51:24.950000 audit: BPF prog-id=32 op=UNLOAD Oct 2 18:51:24.950000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.950000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.950000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.950000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.950000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.950000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.950000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.950000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.950000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.950000 audit: BPF prog-id=42 op=LOAD Oct 2 18:51:24.950000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.950000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.950000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.950000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.950000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.950000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.950000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.950000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.950000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.950000 audit: BPF prog-id=43 op=LOAD Oct 2 18:51:24.950000 audit: BPF prog-id=33 op=UNLOAD Oct 2 18:51:24.950000 audit: BPF prog-id=34 op=UNLOAD Oct 2 18:51:24.951000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.951000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.951000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.951000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.951000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.951000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.951000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.951000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.951000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.951000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.952000 audit: BPF prog-id=44 op=LOAD Oct 2 18:51:24.952000 audit: BPF prog-id=28 op=UNLOAD Oct 2 18:51:24.952000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.952000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.953000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.953000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.953000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.953000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.953000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.953000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.953000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.953000 audit: BPF prog-id=45 op=LOAD Oct 2 18:51:24.953000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.953000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.953000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.953000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.953000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.953000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.953000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.953000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.953000 audit[1]: 
AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.953000 audit: BPF prog-id=46 op=LOAD Oct 2 18:51:24.953000 audit: BPF prog-id=21 op=UNLOAD Oct 2 18:51:24.953000 audit: BPF prog-id=22 op=UNLOAD Oct 2 18:51:24.954000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.954000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.954000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.954000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.954000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.954000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.954000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.954000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.954000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.955000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.955000 audit: BPF prog-id=47 op=LOAD Oct 2 18:51:24.955000 audit: BPF prog-id=24 op=UNLOAD Oct 2 18:51:24.955000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.955000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.955000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.955000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.955000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
18:51:24.955000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.955000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.955000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.955000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.955000 audit: BPF prog-id=48 op=LOAD Oct 2 18:51:24.955000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.955000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.955000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.955000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.955000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.955000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.955000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.955000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.955000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.955000 audit: BPF prog-id=49 op=LOAD Oct 2 18:51:24.955000 audit: BPF prog-id=25 op=UNLOAD Oct 2 18:51:24.955000 audit: BPF prog-id=26 op=UNLOAD Oct 2 18:51:24.958000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.958000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.958000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.958000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.958000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.958000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.958000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.958000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.958000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.958000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.959000 audit: BPF prog-id=50 op=LOAD Oct 2 18:51:24.959000 audit: BPF prog-id=18 op=UNLOAD Oct 2 18:51:24.959000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.959000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.959000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.959000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.959000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.959000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.959000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.959000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.959000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.959000 audit: BPF prog-id=51 op=LOAD Oct 2 18:51:24.959000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.959000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.959000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.959000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.959000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.959000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.959000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.959000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.959000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.959000 audit: BPF prog-id=52 op=LOAD Oct 2 18:51:24.959000 audit: BPF prog-id=19 op=UNLOAD Oct 2 18:51:24.959000 audit: BPF prog-id=20 op=UNLOAD Oct 2 18:51:24.961000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.961000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.961000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.961000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.961000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.961000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.961000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.961000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.961000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.961000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.961000 audit: BPF prog-id=53 op=LOAD Oct 2 18:51:24.961000 audit: BPF prog-id=35 op=UNLOAD Oct 2 18:51:24.962000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.962000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.962000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.962000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.962000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.962000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.962000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.962000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.962000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.963000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:24.963000 audit: BPF prog-id=54 op=LOAD Oct 2 18:51:24.963000 audit: BPF prog-id=27 op=UNLOAD Oct 2 18:51:24.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:24.983561 systemd[1]: Started kubelet.service. Oct 2 18:51:25.016262 systemd[1]: Starting coreos-metadata.service... 
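The coreos-metadata run recorded just below first PUTs http://169.254.169.254/latest/api/token and then fetches a series of 2019-10-01 meta-data paths. A minimal sketch of that request pattern, assuming the standard EC2 IMDSv2 token headers (X-aws-ec2-metadata-token-ttl-seconds on the PUT, X-aws-ec2-metadata-token on the GETs); it mirrors the sequence shown in the log, not coreos-metadata's actual implementation.

import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl_seconds: int = 21600) -> str:
    # Same endpoint the log shows as "Putting .../latest/api/token".
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    # Plain GET with the session token attached.
    req = urllib.request.Request(
        f"{IMDS}{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    # A few of the same paths fetched below.
    for path in ("/2019-10-01/meta-data/instance-id",
                 "/2019-10-01/meta-data/local-ipv4",
                 "/2019-10-01/meta-data/placement/availability-zone"):
        print(path, "->", imds_get(path, token))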
Oct 2 18:51:25.178321 kubelet[2079]: E1002 18:51:25.178174 2079 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Oct 2 18:51:25.182910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 2 18:51:25.183306 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 2 18:51:25.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Oct 2 18:51:25.228595 coreos-metadata[2082]: Oct 02 18:51:25.228 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Oct 2 18:51:25.229679 coreos-metadata[2082]: Oct 02 18:51:25.229 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1
Oct 2 18:51:25.230382 coreos-metadata[2082]: Oct 02 18:51:25.230 INFO Fetch successful
Oct 2 18:51:25.230484 coreos-metadata[2082]: Oct 02 18:51:25.230 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1
Oct 2 18:51:25.231103 coreos-metadata[2082]: Oct 02 18:51:25.231 INFO Fetch successful
Oct 2 18:51:25.231258 coreos-metadata[2082]: Oct 02 18:51:25.231 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1
Oct 2 18:51:25.231886 coreos-metadata[2082]: Oct 02 18:51:25.231 INFO Fetch successful
Oct 2 18:51:25.231983 coreos-metadata[2082]: Oct 02 18:51:25.231 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1
Oct 2 18:51:25.232638 coreos-metadata[2082]: Oct 02 18:51:25.232 INFO Fetch successful
Oct 2 18:51:25.232727 coreos-metadata[2082]: Oct 02 18:51:25.232 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1
Oct 2 18:51:25.233395 coreos-metadata[2082]: Oct 02 18:51:25.233 INFO Fetch successful
Oct 2 18:51:25.233510 coreos-metadata[2082]: Oct 02 18:51:25.233 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1
Oct 2 18:51:25.234109 coreos-metadata[2082]: Oct 02 18:51:25.234 INFO Fetch successful
Oct 2 18:51:25.234241 coreos-metadata[2082]: Oct 02 18:51:25.234 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1
Oct 2 18:51:25.234835 coreos-metadata[2082]: Oct 02 18:51:25.234 INFO Fetch successful
Oct 2 18:51:25.234921 coreos-metadata[2082]: Oct 02 18:51:25.234 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1
Oct 2 18:51:25.235714 coreos-metadata[2082]: Oct 02 18:51:25.235 INFO Fetch successful
Oct 2 18:51:25.257954 systemd[1]: Finished coreos-metadata.service.
Oct 2 18:51:25.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 18:51:25.806587 systemd[1]: Stopped kubelet.service.
Oct 2 18:51:25.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Oct 2 18:51:25.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:25.849137 systemd[1]: Reloading. Oct 2 18:51:26.049277 /usr/lib/systemd/system-generators/torcx-generator[2145]: time="2023-10-02T18:51:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 18:51:26.049368 /usr/lib/systemd/system-generators/torcx-generator[2145]: time="2023-10-02T18:51:26Z" level=info msg="torcx already run" Oct 2 18:51:26.304235 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 18:51:26.304275 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 18:51:26.343269 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 18:51:26.507000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.507000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.507000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.507000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.507000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.507000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.507000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.507000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.507000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.507000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.507000 audit: BPF prog-id=55 op=LOAD Oct 2 18:51:26.507000 audit: BPF 
prog-id=37 op=UNLOAD Oct 2 18:51:26.513000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.513000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.513000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.513000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.513000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.513000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.513000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.513000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.513000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.514000 audit: BPF prog-id=56 op=LOAD Oct 2 18:51:26.514000 audit: BPF prog-id=38 op=UNLOAD Oct 2 18:51:26.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.514000 audit: BPF prog-id=57 op=LOAD Oct 2 18:51:26.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.514000 audit: BPF prog-id=58 op=LOAD Oct 2 18:51:26.514000 audit: BPF prog-id=39 op=UNLOAD Oct 2 18:51:26.514000 audit: BPF prog-id=40 op=UNLOAD Oct 2 18:51:26.519000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.519000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.519000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.519000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.519000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.519000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.519000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.519000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.519000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.520000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.520000 audit: BPF prog-id=59 op=LOAD Oct 2 18:51:26.520000 audit: BPF prog-id=41 op=UNLOAD Oct 2 18:51:26.520000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.520000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.520000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.520000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.520000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.520000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.520000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.520000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.520000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.520000 audit: BPF prog-id=60 op=LOAD Oct 2 18:51:26.520000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.520000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 18:51:26.520000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.520000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.520000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.520000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.520000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.520000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.520000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.520000 audit: BPF prog-id=61 op=LOAD Oct 2 18:51:26.520000 audit: BPF prog-id=42 op=UNLOAD Oct 2 18:51:26.520000 audit: BPF prog-id=43 op=UNLOAD Oct 2 18:51:26.522000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.522000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.522000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.522000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.522000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.522000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.522000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.522000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.522000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.522000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.522000 audit: BPF prog-id=62 op=LOAD Oct 2 18:51:26.522000 audit: BPF prog-id=44 op=UNLOAD Oct 2 18:51:26.523000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.523000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.523000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.523000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.523000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.523000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.523000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.523000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.523000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.523000 audit: BPF prog-id=63 op=LOAD Oct 2 18:51:26.523000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.523000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.523000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.523000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.523000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.523000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.523000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.523000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.523000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.523000 audit: BPF prog-id=64 op=LOAD Oct 2 18:51:26.523000 audit: BPF prog-id=45 op=UNLOAD Oct 2 18:51:26.523000 audit: BPF prog-id=46 op=UNLOAD Oct 2 18:51:26.525000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.525000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.525000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.525000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.525000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.525000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.525000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.525000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.525000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.525000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.525000 audit: BPF prog-id=65 op=LOAD Oct 2 18:51:26.526000 audit: BPF prog-id=47 op=UNLOAD Oct 2 18:51:26.526000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.526000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.526000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.526000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.526000 audit: BPF prog-id=66 op=LOAD Oct 2 18:51:26.526000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.526000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.526000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.526000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.526000 audit: BPF prog-id=67 op=LOAD Oct 2 18:51:26.526000 audit: BPF prog-id=48 op=UNLOAD Oct 2 18:51:26.526000 audit: BPF prog-id=49 op=UNLOAD Oct 2 18:51:26.529000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.529000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.529000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.529000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.529000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.529000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.529000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.529000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.529000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.530000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.530000 audit: BPF prog-id=68 op=LOAD Oct 2 18:51:26.530000 audit: BPF prog-id=50 op=UNLOAD Oct 2 18:51:26.530000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.530000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.530000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.530000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.530000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.530000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.530000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.530000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
18:51:26.530000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.530000 audit: BPF prog-id=69 op=LOAD Oct 2 18:51:26.530000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.530000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.530000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.530000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.530000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.530000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.530000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.530000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.530000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.530000 audit: BPF prog-id=70 op=LOAD Oct 2 18:51:26.530000 audit: BPF prog-id=51 op=UNLOAD Oct 2 18:51:26.530000 audit: BPF prog-id=52 op=UNLOAD Oct 2 18:51:26.532000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.532000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.532000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.532000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.532000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.532000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.532000 audit[1]: AVC 
avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.532000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.532000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.532000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.532000 audit: BPF prog-id=71 op=LOAD Oct 2 18:51:26.532000 audit: BPF prog-id=53 op=UNLOAD Oct 2 18:51:26.533000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.533000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.533000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.533000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.533000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.533000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.533000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.533000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.533000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.534000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:26.534000 audit: BPF prog-id=72 op=LOAD Oct 2 18:51:26.534000 audit: BPF prog-id=54 op=UNLOAD Oct 2 18:51:26.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:26.582885 systemd[1]: Started kubelet.service. 
Oct 2 18:51:26.720159 kubelet[2198]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 18:51:26.720753 kubelet[2198]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 18:51:26.720871 kubelet[2198]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 18:51:26.721184 kubelet[2198]: I1002 18:51:26.721120 2198 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 18:51:26.723724 kubelet[2198]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 18:51:26.723897 kubelet[2198]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 18:51:26.724019 kubelet[2198]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 18:51:27.684372 kubelet[2198]: I1002 18:51:27.684333 2198 server.go:413] "Kubelet version" kubeletVersion="v1.25.10" Oct 2 18:51:27.684782 kubelet[2198]: I1002 18:51:27.684739 2198 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 18:51:27.685316 kubelet[2198]: I1002 18:51:27.685285 2198 server.go:825] "Client rotation is on, will bootstrap in background" Oct 2 18:51:27.690532 kubelet[2198]: I1002 18:51:27.690485 2198 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 18:51:27.693779 kubelet[2198]: W1002 18:51:27.693746 2198 machine.go:65] Cannot read vendor id correctly, set empty. Oct 2 18:51:27.695050 kubelet[2198]: I1002 18:51:27.695022 2198 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 18:51:27.695629 kubelet[2198]: I1002 18:51:27.695606 2198 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 18:51:27.695859 kubelet[2198]: I1002 18:51:27.695834 2198 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Oct 2 18:51:27.696257 kubelet[2198]: I1002 18:51:27.696233 2198 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 18:51:27.696373 kubelet[2198]: I1002 18:51:27.696353 2198 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true Oct 2 18:51:27.696655 kubelet[2198]: I1002 18:51:27.696631 2198 state_mem.go:36] "Initialized new in-memory state store" Oct 2 18:51:27.703847 kubelet[2198]: I1002 18:51:27.703810 2198 kubelet.go:381] "Attempting to sync node with API server" Oct 2 18:51:27.704062 kubelet[2198]: I1002 18:51:27.704040 2198 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 18:51:27.704216 kubelet[2198]: I1002 18:51:27.704169 2198 kubelet.go:281] "Adding apiserver pod source" Oct 2 18:51:27.704341 kubelet[2198]: I1002 18:51:27.704321 2198 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 18:51:27.706121 kubelet[2198]: E1002 18:51:27.706073 2198 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:27.706468 kubelet[2198]: E1002 18:51:27.706445 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:27.708185 kubelet[2198]: I1002 18:51:27.708151 2198 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 18:51:27.709530 kubelet[2198]: W1002 18:51:27.709502 2198 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 2 18:51:27.710855 kubelet[2198]: I1002 18:51:27.710817 2198 server.go:1175] "Started kubelet" Oct 2 18:51:27.711000 audit[2198]: AVC avc: denied { mac_admin } for pid=2198 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:27.711000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 18:51:27.711000 audit[2198]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40009396b0 a1=400081ed98 a2=4000939680 a3=25 items=0 ppid=1 pid=2198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:27.711000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 18:51:27.711000 audit[2198]: AVC avc: denied { mac_admin } for pid=2198 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:27.711000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 18:51:27.711000 audit[2198]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000828f80 a1=400081edb0 a2=4000939740 a3=25 items=0 ppid=1 pid=2198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:27.711000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 18:51:27.713840 kubelet[2198]: I1002 18:51:27.713001 2198 kubelet.go:1274] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 18:51:27.713840 kubelet[2198]: I1002 18:51:27.713081 2198 kubelet.go:1278] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 18:51:27.713840 kubelet[2198]: I1002 18:51:27.713400 2198 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 18:51:27.718442 kubelet[2198]: I1002 18:51:27.718365 2198 server.go:155] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 18:51:27.739698 kubelet[2198]: E1002 18:51:27.739647 2198 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 18:51:27.740345 kubelet[2198]: E1002 18:51:27.739718 2198 kubelet.go:1317] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 18:51:27.740345 kubelet[2198]: I1002 18:51:27.730922 2198 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 18:51:27.740345 kubelet[2198]: I1002 18:51:27.730947 2198 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 2 18:51:27.743282 kubelet[2198]: E1002 18:51:27.743248 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:51:27.748279 kubelet[2198]: I1002 18:51:27.745348 2198 server.go:438] "Adding debug handlers to kubelet server" Oct 2 18:51:27.762316 kubelet[2198]: E1002 18:51:27.762274 2198 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "172.31.28.169" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 18:51:27.778000 audit[2214]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=2214 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:27.778000 audit[2214]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe05084b0 a2=0 a3=1 items=0 ppid=2198 pid=2214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:27.778000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 18:51:27.780524 kubelet[2198]: W1002 18:51:27.777976 2198 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 18:51:27.780710 kubelet[2198]: E1002 18:51:27.780684 2198 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 18:51:27.780871 kubelet[2198]: W1002 18:51:27.778590 2198 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 18:51:27.781014 kubelet[2198]: E1002 18:51:27.780993 2198 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 18:51:27.781118 kubelet[2198]: W1002 18:51:27.778936 2198 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.28.169" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 18:51:27.781261 kubelet[2198]: E1002 18:51:27.781240 2198 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.28.169" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 
18:51:27.781377 kubelet[2198]: E1002 18:51:27.780030 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05b132ad63", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 710776675, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 710776675, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 18:51:27.787925 kubelet[2198]: E1002 18:51:27.787797 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05b2ec08c9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 739701449, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 739701449, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 18:51:27.796000 audit[2218]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2218 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:27.796000 audit[2218]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffd1a36320 a2=0 a3=1 items=0 ppid=2198 pid=2218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:27.796000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 18:51:27.823527 kubelet[2198]: E1002 18:51:27.823406 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05b7d14733", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.28.169 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821834035, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821834035, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 18:51:27.824280 kubelet[2198]: I1002 18:51:27.824251 2198 cpu_manager.go:213] "Starting CPU manager" policy="none" Oct 2 18:51:27.824445 kubelet[2198]: I1002 18:51:27.824424 2198 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s" Oct 2 18:51:27.824579 kubelet[2198]: I1002 18:51:27.824558 2198 state_mem.go:36] "Initialized new in-memory state store" Oct 2 18:51:27.825667 kubelet[2198]: E1002 18:51:27.825459 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05b7d187f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.28.169 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821850611, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821850611, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 18:51:27.827351 kubelet[2198]: E1002 18:51:27.827243 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05b7d19d59", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.28.169 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821856089, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821856089, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 18:51:27.827699 kubelet[2198]: I1002 18:51:27.827671 2198 policy_none.go:49] "None policy: Start" Oct 2 18:51:27.828907 kubelet[2198]: I1002 18:51:27.828876 2198 memory_manager.go:168] "Starting memorymanager" policy="None" Oct 2 18:51:27.829108 kubelet[2198]: I1002 18:51:27.829086 2198 state_mem.go:35] "Initializing new in-memory state store" Oct 2 18:51:27.831076 kubelet[2198]: E1002 18:51:27.831038 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:27.833118 kubelet[2198]: I1002 18:51:27.833084 2198 kubelet_node_status.go:70] "Attempting to register node" node="172.31.28.169" Oct 2 18:51:27.835131 kubelet[2198]: E1002 18:51:27.835079 2198 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.28.169" Oct 2 18:51:27.835890 kubelet[2198]: E1002 18:51:27.835767 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05b7d14733", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.28.169 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821834035, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 833033369, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.28.169.178a5f05b7d14733" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 18:51:27.838634 kubelet[2198]: E1002 18:51:27.837899 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05b7d187f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.28.169 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821850611, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 833039976, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.28.169.178a5f05b7d187f3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 18:51:27.840416 kubelet[2198]: E1002 18:51:27.839857 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05b7d19d59", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.28.169 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821856089, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 833047027, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.28.169.178a5f05b7d19d59" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 18:51:27.840554 systemd[1]: Created slice kubepods.slice. 
Oct 2 18:51:27.814000 audit[2220]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=2220 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:27.814000 audit[2220]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffa94f260 a2=0 a3=1 items=0 ppid=2198 pid=2220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:27.814000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 18:51:27.851477 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 18:51:27.858000 audit[2225]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=2225 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:27.858000 audit[2225]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffcfc84580 a2=0 a3=1 items=0 ppid=2198 pid=2225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:27.858000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 18:51:27.876620 systemd[1]: Created slice kubepods-besteffort.slice. Oct 2 18:51:27.885056 kubelet[2198]: I1002 18:51:27.885014 2198 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 18:51:27.884000 audit[2198]: AVC avc: denied { mac_admin } for pid=2198 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:27.884000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 18:51:27.884000 audit[2198]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000edac00 a1=40009ce048 a2=4000edaba0 a3=25 items=0 ppid=1 pid=2198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:27.884000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 18:51:27.885667 kubelet[2198]: I1002 18:51:27.885132 2198 server.go:86] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 18:51:27.885667 kubelet[2198]: I1002 18:51:27.885554 2198 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 18:51:27.889653 kubelet[2198]: E1002 18:51:27.889615 2198 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.28.169\" not found" Oct 2 18:51:27.891815 kubelet[2198]: E1002 18:51:27.891620 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05bbe06f20", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 889936160, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 889936160, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 18:51:27.931333 kubelet[2198]: E1002 18:51:27.931290 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:27.934000 audit[2230]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2230 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:27.934000 audit[2230]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffffcd819f0 a2=0 a3=1 items=0 ppid=2198 pid=2230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:27.934000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 18:51:27.938000 audit[2231]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=2231 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:27.938000 audit[2231]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe82f0d00 a2=0 a3=1 items=0 ppid=2198 pid=2231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:27.938000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 18:51:27.953000 audit[2234]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=2234 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:27.953000 audit[2234]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffcd9b3740 a2=0 a3=1 items=0 ppid=2198 pid=2234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:27.953000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 18:51:27.968000 audit[2237]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2237 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:27.968000 audit[2237]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffc54e77b0 a2=0 a3=1 items=0 ppid=2198 pid=2237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:27.968000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 18:51:27.973000 audit[2238]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=2238 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:27.973000 audit[2238]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd1aa2070 a2=0 a3=1 items=0 ppid=2198 pid=2238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:27.973000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 18:51:27.976000 audit[2239]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=2239 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:27.976000 audit[2239]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd8b0d6c0 a2=0 a3=1 items=0 ppid=2198 pid=2239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:27.976000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 18:51:27.982446 kubelet[2198]: E1002 18:51:27.982382 2198 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "172.31.28.169" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 18:51:27.986000 audit[2241]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=2241 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:27.986000 audit[2241]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffc32cf110 a2=0 a3=1 items=0 ppid=2198 pid=2241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:27.986000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 18:51:27.994000 audit[2243]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2243 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:27.994000 audit[2243]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe13f84a0 a2=0 a3=1 items=0 ppid=2198 pid=2243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:27.994000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 18:51:28.031575 kubelet[2198]: E1002 18:51:28.031518 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:28.033000 audit[2246]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2246 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:28.033000 audit[2246]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffcc085c50 a2=0 a3=1 items=0 ppid=2198 pid=2246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.033000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 18:51:28.036662 kubelet[2198]: I1002 18:51:28.036614 2198 kubelet_node_status.go:70] "Attempting to register node" node="172.31.28.169" Oct 2 18:51:28.038126 kubelet[2198]: E1002 18:51:28.038045 2198 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.28.169" Oct 2 18:51:28.038377 kubelet[2198]: E1002 18:51:28.038253 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05b7d14733", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.28.169 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821834035, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 28, 36557533, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.28.169.178a5f05b7d14733" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
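The audit PROCTITLE fields in these records are the iptables/ip6tables command lines, hex-encoded with NUL bytes separating the arguments. As a decoding aid (not part of the log), a minimal Python sketch like the following recovers the readable command; the sample value is copied verbatim from the PROCTITLE record for pid 2231 above and decodes to "iptables -w 5 -W 100000 -N KUBE-MARK-DROP -t nat".

import binascii

def decode_proctitle(hex_value: str) -> str:
    # argv elements are NUL-separated inside the hex-encoded blob
    raw = binascii.unhexlify(hex_value)
    return " ".join(p.decode("utf-8", errors="replace") for p in raw.split(b"\x00") if p)

# Sample copied verbatim from the PROCTITLE record for pid 2231 above.
sample = "69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174"
print(decode_proctitle(sample))  # iptables -w 5 -W 100000 -N KUBE-MARK-DROP -t nat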
Oct 2 18:51:28.039818 kubelet[2198]: E1002 18:51:28.039632 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05b7d187f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.28.169 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821850611, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 28, 36565988, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.28.169.178a5f05b7d187f3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 18:51:28.043000 audit[2248]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=2248 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:28.043000 audit[2248]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffe762fd60 a2=0 a3=1 items=0 ppid=2198 pid=2248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.043000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 18:51:28.062000 audit[2251]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=2251 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:28.062000 audit[2251]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=ffffcf06d3e0 a2=0 a3=1 items=0 ppid=2198 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.062000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 18:51:28.064353 kubelet[2198]: I1002 18:51:28.064321 2198 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Oct 2 18:51:28.066000 audit[2253]: NETFILTER_CFG table=mangle:17 family=2 entries=1 op=nft_register_chain pid=2253 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:28.066000 audit[2253]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd52fbda0 a2=0 a3=1 items=0 ppid=2198 pid=2253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.066000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 18:51:28.067000 audit[2252]: NETFILTER_CFG table=mangle:18 family=10 entries=2 op=nft_register_chain pid=2252 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:28.067000 audit[2252]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe95e45d0 a2=0 a3=1 items=0 ppid=2198 pid=2252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.067000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 18:51:28.071000 audit[2256]: NETFILTER_CFG table=nat:19 family=2 entries=1 op=nft_register_chain pid=2256 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:28.071000 audit[2256]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdcd740c0 a2=0 a3=1 items=0 ppid=2198 pid=2256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.071000 audit[2255]: NETFILTER_CFG table=nat:20 family=10 entries=2 op=nft_register_chain pid=2255 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:28.071000 audit[2255]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff6b6b0b0 a2=0 a3=1 items=0 ppid=2198 pid=2255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.071000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 18:51:28.071000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 18:51:28.075000 audit[2258]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=2258 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:28.078912 kernel: kauditd_printk_skb: 498 callbacks suppressed Oct 2 18:51:28.079017 kernel: audit: type=1325 audit(1696272688.075:616): table=filter:21 family=2 entries=1 op=nft_register_chain pid=2258 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:28.075000 audit[2258]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc1cb0280 a2=0 a3=1 items=0 ppid=2198 pid=2258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.096939 kernel: audit: type=1300 audit(1696272688.075:616): arch=c00000b7 syscall=211 
success=yes exit=104 a0=3 a1=ffffc1cb0280 a2=0 a3=1 items=0 ppid=2198 pid=2258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.075000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 18:51:28.103379 kernel: audit: type=1327 audit(1696272688.075:616): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 18:51:28.083000 audit[2259]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=2259 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:28.109447 kernel: audit: type=1325 audit(1696272688.083:617): table=nat:22 family=10 entries=1 op=nft_register_rule pid=2259 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:28.083000 audit[2259]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffd97e1f90 a2=0 a3=1 items=0 ppid=2198 pid=2259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.116800 kubelet[2198]: E1002 18:51:28.116679 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05b7d19d59", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.28.169 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821856089, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 28, 36570805, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.28.169.178a5f05b7d19d59" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
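Every SYSCALL record in this stretch reports arch=c00000b7 syscall=211. As a hedged aside, not something the log states itself: on this aarch64 machine 0xc00000b7 corresponds to AUDIT_ARCH_AARCH64, and syscall 211 in the arm64 generic table is sendmsg, i.e. the netlink message xtables-nft-multi sends to program the nft rules. A small sketch (constant names modeled on the kernel's audit arch flags) of how that arch value is composed:

# AUDIT_ARCH_AARCH64 is the AArch64 ELF machine number combined with the
# 64-bit and little-endian audit flags; the result matches the arch= field above.
AUDIT_ARCH_64BIT = 0x80000000
AUDIT_ARCH_LE    = 0x40000000
EM_AARCH64       = 183  # 0xb7

print(hex(AUDIT_ARCH_64BIT | AUDIT_ARCH_LE | EM_AARCH64))  # 0xc00000b7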
Oct 2 18:51:28.121676 kernel: audit: type=1300 audit(1696272688.083:617): arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffd97e1f90 a2=0 a3=1 items=0 ppid=2198 pid=2259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.083000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 18:51:28.129399 kernel: audit: type=1327 audit(1696272688.083:617): proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 18:51:28.096000 audit[2260]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=2260 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:28.131965 kubelet[2198]: E1002 18:51:28.131927 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:28.135666 kernel: audit: type=1325 audit(1696272688.096:618): table=filter:23 family=10 entries=2 op=nft_register_chain pid=2260 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:28.096000 audit[2260]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=fffffb728710 a2=0 a3=1 items=0 ppid=2198 pid=2260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.147757 kernel: audit: type=1300 audit(1696272688.096:618): arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=fffffb728710 a2=0 a3=1 items=0 ppid=2198 pid=2260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.096000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 18:51:28.153805 kernel: audit: type=1327 audit(1696272688.096:618): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 18:51:28.154257 kernel: audit: type=1325 audit(1696272688.108:619): table=filter:24 family=10 entries=1 op=nft_register_rule pid=2262 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:28.108000 audit[2262]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=2262 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:28.108000 audit[2262]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffc44933f0 a2=0 a3=1 items=0 ppid=2198 pid=2262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.108000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 18:51:28.121000 audit[2263]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=2263 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" 
Oct 2 18:51:28.121000 audit[2263]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd5132330 a2=0 a3=1 items=0 ppid=2198 pid=2263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.121000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 18:51:28.129000 audit[2264]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=2264 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:28.129000 audit[2264]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd3f25720 a2=0 a3=1 items=0 ppid=2198 pid=2264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.129000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 18:51:28.147000 audit[2266]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=2266 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:28.147000 audit[2266]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffff2c627d0 a2=0 a3=1 items=0 ppid=2198 pid=2266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.147000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 18:51:28.163000 audit[2268]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=2268 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:28.163000 audit[2268]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffffb383db0 a2=0 a3=1 items=0 ppid=2198 pid=2268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.163000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 18:51:28.171000 audit[2270]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=2270 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:28.171000 audit[2270]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffeed06d50 a2=0 a3=1 items=0 ppid=2198 pid=2270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.171000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 18:51:28.179000 audit[2272]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=2272 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:28.179000 audit[2272]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffefaa2680 a2=0 a3=1 items=0 ppid=2198 pid=2272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.179000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 18:51:28.189000 audit[2274]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=2274 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:28.189000 audit[2274]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=ffffcf607090 a2=0 a3=1 items=0 ppid=2198 pid=2274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.189000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 18:51:28.194583 kubelet[2198]: I1002 18:51:28.194551 2198 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Oct 2 18:51:28.194825 kubelet[2198]: I1002 18:51:28.194786 2198 status_manager.go:161] "Starting to sync pod status with apiserver" Oct 2 18:51:28.194959 kubelet[2198]: I1002 18:51:28.194938 2198 kubelet.go:2010] "Starting kubelet main sync loop" Oct 2 18:51:28.195159 kubelet[2198]: E1002 18:51:28.195137 2198 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 18:51:28.197517 kubelet[2198]: W1002 18:51:28.197453 2198 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 18:51:28.197517 kubelet[2198]: E1002 18:51:28.197506 2198 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 18:51:28.198000 audit[2275]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=2275 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:28.198000 audit[2275]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc691d080 a2=0 a3=1 items=0 ppid=2198 pid=2275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.198000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 18:51:28.202000 audit[2276]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=2276 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:28.202000 audit[2276]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=100 a0=3 a1=ffffcb93fe20 a2=0 a3=1 items=0 ppid=2198 pid=2276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.202000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 18:51:28.206000 audit[2277]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2277 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:28.206000 audit[2277]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd1c0f210 a2=0 a3=1 items=0 ppid=2198 pid=2277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:28.206000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 18:51:28.233106 kubelet[2198]: E1002 18:51:28.233054 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:28.334183 kubelet[2198]: E1002 18:51:28.334132 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:28.384336 kubelet[2198]: E1002 18:51:28.384287 2198 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "172.31.28.169" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 18:51:28.434840 kubelet[2198]: E1002 18:51:28.434789 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:28.439915 kubelet[2198]: I1002 18:51:28.439869 2198 kubelet_node_status.go:70] "Attempting to register node" node="172.31.28.169" Oct 2 18:51:28.441620 kubelet[2198]: E1002 18:51:28.441514 2198 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.28.169" Oct 2 18:51:28.443029 kubelet[2198]: E1002 18:51:28.442914 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05b7d14733", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.28.169 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821834035, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 28, 439826773, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), 
ReportingController:"", ReportingInstance:""}': 'events "172.31.28.169.178a5f05b7d14733" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 18:51:28.517362 kubelet[2198]: E1002 18:51:28.517247 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05b7d187f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.28.169 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821850611, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 28, 439834677, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.28.169.178a5f05b7d187f3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 18:51:28.535545 kubelet[2198]: E1002 18:51:28.535468 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:28.592756 kubelet[2198]: W1002 18:51:28.592716 2198 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 18:51:28.593009 kubelet[2198]: E1002 18:51:28.592985 2198 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 18:51:28.635886 kubelet[2198]: E1002 18:51:28.635840 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:28.707122 kubelet[2198]: E1002 18:51:28.707007 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:28.716893 kubelet[2198]: E1002 18:51:28.716777 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05b7d19d59", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, 
Reason:"NodeHasSufficientPID", Message:"Node 172.31.28.169 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821856089, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 28, 439839169, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.28.169.178a5f05b7d19d59" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 18:51:28.723049 kubelet[2198]: W1002 18:51:28.723016 2198 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 18:51:28.723290 kubelet[2198]: E1002 18:51:28.723266 2198 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 18:51:28.736155 kubelet[2198]: E1002 18:51:28.736105 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:28.836307 kubelet[2198]: E1002 18:51:28.836248 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:28.869334 kubelet[2198]: W1002 18:51:28.869294 2198 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.28.169" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 18:51:28.869484 kubelet[2198]: E1002 18:51:28.869341 2198 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.28.169" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 18:51:28.936695 kubelet[2198]: E1002 18:51:28.936608 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:29.037540 kubelet[2198]: E1002 18:51:29.037379 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:29.069344 kubelet[2198]: W1002 18:51:29.069298 2198 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 18:51:29.069344 kubelet[2198]: E1002 18:51:29.069346 2198 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 18:51:29.138547 kubelet[2198]: E1002 18:51:29.138485 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:29.186597 kubelet[2198]: E1002 18:51:29.186556 2198 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "172.31.28.169" is forbidden: User "system:anonymous" 
cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 18:51:29.238875 kubelet[2198]: E1002 18:51:29.238811 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:29.243057 kubelet[2198]: I1002 18:51:29.243007 2198 kubelet_node_status.go:70] "Attempting to register node" node="172.31.28.169" Oct 2 18:51:29.244326 kubelet[2198]: E1002 18:51:29.244296 2198 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.28.169" Oct 2 18:51:29.245254 kubelet[2198]: E1002 18:51:29.245129 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05b7d14733", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.28.169 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821834035, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 29, 242964915, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.28.169.178a5f05b7d14733" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 18:51:29.247341 kubelet[2198]: E1002 18:51:29.247237 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05b7d187f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.28.169 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821850611, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 29, 242972181, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.28.169.178a5f05b7d187f3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 18:51:29.316649 kubelet[2198]: E1002 18:51:29.316532 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05b7d19d59", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.28.169 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821856089, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 29, 242976589, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.28.169.178a5f05b7d19d59" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 18:51:29.339839 kubelet[2198]: E1002 18:51:29.339784 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:29.440681 kubelet[2198]: E1002 18:51:29.440636 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:29.541478 kubelet[2198]: E1002 18:51:29.541442 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:29.642596 kubelet[2198]: E1002 18:51:29.642475 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:29.707413 kubelet[2198]: E1002 18:51:29.707369 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:29.743319 kubelet[2198]: E1002 18:51:29.743284 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:29.843975 kubelet[2198]: E1002 18:51:29.843936 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:29.945220 kubelet[2198]: E1002 18:51:29.945061 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:30.045949 kubelet[2198]: E1002 18:51:30.045911 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:30.147006 kubelet[2198]: E1002 18:51:30.146974 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:30.247790 kubelet[2198]: E1002 18:51:30.247659 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:30.348839 kubelet[2198]: E1002 18:51:30.348784 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:30.370924 kubelet[2198]: W1002 18:51:30.370894 2198 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 18:51:30.371158 kubelet[2198]: E1002 18:51:30.371125 2198 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 18:51:30.449710 kubelet[2198]: E1002 18:51:30.449661 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:30.550677 kubelet[2198]: E1002 18:51:30.550631 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:30.651641 kubelet[2198]: E1002 18:51:30.651593 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:30.708591 kubelet[2198]: E1002 18:51:30.708413 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:30.752233 kubelet[2198]: E1002 18:51:30.752126 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:30.789076 kubelet[2198]: E1002 18:51:30.789000 2198 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "172.31.28.169" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 18:51:30.846654 kubelet[2198]: I1002 18:51:30.846466 2198 kubelet_node_status.go:70] "Attempting to register node" node="172.31.28.169" Oct 2 18:51:30.848750 
kubelet[2198]: E1002 18:51:30.848612 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05b7d14733", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.28.169 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821834035, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 30, 846365883, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.28.169.178a5f05b7d14733" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 18:51:30.849442 kubelet[2198]: E1002 18:51:30.849397 2198 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.28.169" Oct 2 18:51:30.850565 kubelet[2198]: E1002 18:51:30.850422 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05b7d187f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.28.169 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821850611, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 30, 846414438, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.28.169.178a5f05b7d187f3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 18:51:30.852141 kubelet[2198]: E1002 18:51:30.852017 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05b7d19d59", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.28.169 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821856089, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 30, 846421391, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.28.169.178a5f05b7d19d59" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 18:51:30.853317 kubelet[2198]: E1002 18:51:30.853270 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:30.866315 kubelet[2198]: W1002 18:51:30.866272 2198 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 18:51:30.866315 kubelet[2198]: E1002 18:51:30.866322 2198 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 18:51:30.954216 kubelet[2198]: E1002 18:51:30.954159 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:31.055119 kubelet[2198]: E1002 18:51:31.055078 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:31.156105 kubelet[2198]: E1002 18:51:31.155972 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:31.257101 kubelet[2198]: E1002 18:51:31.257041 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:31.358077 kubelet[2198]: E1002 18:51:31.358014 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:31.459059 kubelet[2198]: E1002 18:51:31.458921 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:31.560011 kubelet[2198]: E1002 18:51:31.559939 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:31.661072 kubelet[2198]: E1002 18:51:31.660999 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:31.696317 kubelet[2198]: W1002 18:51:31.696252 2198 reflector.go:424] 
vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.28.169" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 18:51:31.696492 kubelet[2198]: E1002 18:51:31.696334 2198 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.28.169" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 18:51:31.709512 kubelet[2198]: E1002 18:51:31.709371 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:31.761562 kubelet[2198]: E1002 18:51:31.761509 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:31.805885 kubelet[2198]: W1002 18:51:31.805839 2198 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 18:51:31.806106 kubelet[2198]: E1002 18:51:31.806083 2198 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 18:51:31.862525 kubelet[2198]: E1002 18:51:31.862477 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:31.963925 kubelet[2198]: E1002 18:51:31.963767 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:32.064747 kubelet[2198]: E1002 18:51:32.064680 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:32.165761 kubelet[2198]: E1002 18:51:32.165685 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:32.266174 kubelet[2198]: E1002 18:51:32.266019 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:32.367119 kubelet[2198]: E1002 18:51:32.367037 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:32.468033 kubelet[2198]: E1002 18:51:32.467972 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:32.569143 kubelet[2198]: E1002 18:51:32.569066 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:32.669934 kubelet[2198]: E1002 18:51:32.669884 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:32.710558 kubelet[2198]: E1002 18:51:32.710487 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:32.770539 kubelet[2198]: E1002 18:51:32.770491 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:32.871282 kubelet[2198]: E1002 18:51:32.871096 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:32.887343 kubelet[2198]: E1002 18:51:32.887251 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:51:32.972281 kubelet[2198]: E1002 18:51:32.972230 2198 kubelet.go:2448] "Error 
getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:33.073058 kubelet[2198]: E1002 18:51:33.072989 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:33.173896 kubelet[2198]: E1002 18:51:33.173742 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:33.274521 kubelet[2198]: E1002 18:51:33.274449 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:33.375266 kubelet[2198]: E1002 18:51:33.375218 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:33.476184 kubelet[2198]: E1002 18:51:33.476025 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:33.576835 kubelet[2198]: E1002 18:51:33.576781 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:33.677486 kubelet[2198]: E1002 18:51:33.677432 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:33.711083 kubelet[2198]: E1002 18:51:33.711027 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:33.777882 kubelet[2198]: E1002 18:51:33.777729 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:33.877872 kubelet[2198]: E1002 18:51:33.877824 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:33.978909 kubelet[2198]: E1002 18:51:33.978826 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:33.991470 kubelet[2198]: E1002 18:51:33.991432 2198 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "172.31.28.169" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 18:51:34.051134 kubelet[2198]: I1002 18:51:34.050689 2198 kubelet_node_status.go:70] "Attempting to register node" node="172.31.28.169" Oct 2 18:51:34.052766 kubelet[2198]: E1002 18:51:34.052721 2198 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.28.169" Oct 2 18:51:34.053085 kubelet[2198]: E1002 18:51:34.052959 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05b7d14733", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.28.169 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821834035, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 34, 50623339, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, 
time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.28.169.178a5f05b7d14733" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 18:51:34.054861 kubelet[2198]: E1002 18:51:34.054761 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05b7d187f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.28.169 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821850611, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 34, 50647078, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.28.169.178a5f05b7d187f3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 18:51:34.056309 kubelet[2198]: E1002 18:51:34.056176 2198 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.169.178a5f05b7d19d59", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.28.169", UID:"172.31.28.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.28.169 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.28.169"}, FirstTimestamp:time.Date(2023, time.October, 2, 18, 51, 27, 821856089, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 18, 51, 34, 50652266, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.28.169.178a5f05b7d19d59" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
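The "failed to ensure lease exists, will retry in ..." messages in this section back off geometrically: 400ms, 800ms, 1.6s, 3.2s and, just above, 6.4s, and the "Attempting to register node" entries are spaced the same way. Purely as an illustration of that doubling (the 7s cap below is an assumption for the sketch, not something the log shows):

# Reproduce the retry delays reported by the "failed to ensure lease exists"
# messages above; the cap value is assumed for illustration only.
delay = 0.4  # seconds, first retry interval reported in the log
for attempt in range(1, 6):
    print(f"retry {attempt}: {delay:g}s")
    delay = min(delay * 2.0, 7.0)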
Oct 2 18:51:34.079563 kubelet[2198]: E1002 18:51:34.079517 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:34.180328 kubelet[2198]: E1002 18:51:34.180259 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:34.280539 kubelet[2198]: E1002 18:51:34.280470 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:34.381250 kubelet[2198]: E1002 18:51:34.381080 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:34.481854 kubelet[2198]: E1002 18:51:34.481819 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:34.582519 kubelet[2198]: E1002 18:51:34.582473 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:34.683362 kubelet[2198]: E1002 18:51:34.683235 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:34.711711 kubelet[2198]: E1002 18:51:34.711647 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:34.784281 kubelet[2198]: E1002 18:51:34.784238 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:34.882288 kubelet[2198]: W1002 18:51:34.882234 2198 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 18:51:34.882842 kubelet[2198]: E1002 18:51:34.882292 2198 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 18:51:34.885354 kubelet[2198]: E1002 18:51:34.885322 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:34.986381 kubelet[2198]: E1002 18:51:34.986260 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:35.086912 kubelet[2198]: E1002 18:51:35.086854 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:35.187560 kubelet[2198]: E1002 18:51:35.187518 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:35.206597 kubelet[2198]: W1002 18:51:35.206558 2198 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.28.169" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 18:51:35.206722 kubelet[2198]: E1002 18:51:35.206605 2198 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.28.169" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 18:51:35.288834 kubelet[2198]: E1002 18:51:35.288736 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:35.389642 kubelet[2198]: E1002 18:51:35.389593 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:35.444639 kubelet[2198]: W1002 18:51:35.444591 2198 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User 
"system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 18:51:35.444639 kubelet[2198]: E1002 18:51:35.444640 2198 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 18:51:35.489949 kubelet[2198]: E1002 18:51:35.489902 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:35.590969 kubelet[2198]: E1002 18:51:35.590579 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:35.691289 kubelet[2198]: E1002 18:51:35.691235 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:35.712590 kubelet[2198]: E1002 18:51:35.712533 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:35.791808 kubelet[2198]: E1002 18:51:35.791776 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:35.892752 kubelet[2198]: E1002 18:51:35.892397 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:35.993238 kubelet[2198]: E1002 18:51:35.993177 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:36.094009 kubelet[2198]: E1002 18:51:36.093979 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:36.195340 kubelet[2198]: E1002 18:51:36.194967 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:36.296164 kubelet[2198]: E1002 18:51:36.296127 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:36.324909 kubelet[2198]: W1002 18:51:36.324859 2198 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 18:51:36.325031 kubelet[2198]: E1002 18:51:36.324932 2198 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 18:51:36.396600 kubelet[2198]: E1002 18:51:36.396570 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:36.497618 kubelet[2198]: E1002 18:51:36.497232 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:36.598147 kubelet[2198]: E1002 18:51:36.598096 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:36.699226 kubelet[2198]: E1002 18:51:36.699161 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:36.713667 kubelet[2198]: E1002 18:51:36.713640 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:36.800251 kubelet[2198]: E1002 18:51:36.800219 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:36.901337 kubelet[2198]: E1002 18:51:36.901288 2198 kubelet.go:2448] "Error getting node" err="node 
\"172.31.28.169\" not found" Oct 2 18:51:37.002099 kubelet[2198]: E1002 18:51:37.002067 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:37.103249 kubelet[2198]: E1002 18:51:37.102835 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:37.202989 kubelet[2198]: E1002 18:51:37.202938 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:37.303680 kubelet[2198]: E1002 18:51:37.303597 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:37.404804 kubelet[2198]: E1002 18:51:37.404439 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:37.505322 kubelet[2198]: E1002 18:51:37.505271 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:37.607055 kubelet[2198]: E1002 18:51:37.606559 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:37.689008 kubelet[2198]: I1002 18:51:37.688363 2198 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 18:51:37.707372 kubelet[2198]: E1002 18:51:37.707296 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:37.714662 kubelet[2198]: E1002 18:51:37.714598 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:37.807710 kubelet[2198]: E1002 18:51:37.807644 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:37.889215 kubelet[2198]: E1002 18:51:37.889160 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:51:37.890435 kubelet[2198]: E1002 18:51:37.890397 2198 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.28.169\" not found" Oct 2 18:51:37.908072 kubelet[2198]: E1002 18:51:37.907987 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:38.009573 kubelet[2198]: E1002 18:51:38.008993 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:38.110033 kubelet[2198]: E1002 18:51:38.109965 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:38.145355 kubelet[2198]: E1002 18:51:38.145297 2198 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.28.169" not found Oct 2 18:51:38.210419 kubelet[2198]: E1002 18:51:38.210376 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:38.310554 kubelet[2198]: E1002 18:51:38.310499 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:38.411417 kubelet[2198]: E1002 18:51:38.411370 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:38.512339 kubelet[2198]: E1002 18:51:38.512290 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:38.613465 kubelet[2198]: E1002 18:51:38.613114 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:38.714205 kubelet[2198]: E1002 18:51:38.714154 2198 kubelet.go:2448] "Error 
getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:38.715350 kubelet[2198]: E1002 18:51:38.715310 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:38.814981 kubelet[2198]: E1002 18:51:38.814949 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:38.916567 kubelet[2198]: E1002 18:51:38.916220 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:39.018239 kubelet[2198]: E1002 18:51:39.018184 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:39.119085 kubelet[2198]: E1002 18:51:39.119048 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:39.183759 kubelet[2198]: E1002 18:51:39.183431 2198 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.28.169" not found Oct 2 18:51:39.219496 kubelet[2198]: E1002 18:51:39.219460 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:39.320561 kubelet[2198]: E1002 18:51:39.320524 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:39.421553 kubelet[2198]: E1002 18:51:39.421500 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:39.522594 kubelet[2198]: E1002 18:51:39.522254 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:39.623374 kubelet[2198]: E1002 18:51:39.623320 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:39.716214 kubelet[2198]: E1002 18:51:39.716147 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:39.723483 kubelet[2198]: E1002 18:51:39.723449 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:39.823821 kubelet[2198]: E1002 18:51:39.823773 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:39.924931 kubelet[2198]: E1002 18:51:39.924891 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:40.025808 kubelet[2198]: E1002 18:51:40.025764 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:40.127092 kubelet[2198]: E1002 18:51:40.126771 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:40.227576 kubelet[2198]: E1002 18:51:40.227535 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:40.328377 kubelet[2198]: E1002 18:51:40.328328 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:40.399051 kubelet[2198]: E1002 18:51:40.398727 2198 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.28.169\" not found" node="172.31.28.169" Oct 2 18:51:40.428953 kubelet[2198]: E1002 18:51:40.428918 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:40.454507 kubelet[2198]: I1002 18:51:40.454361 2198 kubelet_node_status.go:70] "Attempting to register node" node="172.31.28.169" Oct 2 18:51:40.529665 kubelet[2198]: E1002 18:51:40.529624 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:40.585890 
kubelet[2198]: I1002 18:51:40.585762 2198 kubelet_node_status.go:73] "Successfully registered node" node="172.31.28.169" Oct 2 18:51:40.630248 kubelet[2198]: E1002 18:51:40.630165 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:40.631000 audit[1995]: USER_END pid=1995 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 18:51:40.632750 sudo[1995]: pam_unix(sudo:session): session closed for user root Oct 2 18:51:40.635561 kernel: kauditd_printk_skb: 32 callbacks suppressed Oct 2 18:51:40.635659 kernel: audit: type=1106 audit(1696272700.631:630): pid=1995 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 18:51:40.632000 audit[1995]: CRED_DISP pid=1995 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 18:51:40.652707 kernel: audit: type=1104 audit(1696272700.632:631): pid=1995 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 18:51:40.666536 sshd[1992]: pam_unix(sshd:session): session closed for user core Oct 2 18:51:40.667000 audit[1992]: USER_END pid=1992 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 18:51:40.670965 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 18:51:40.672087 systemd[1]: sshd@6-172.31.28.169:22-139.178.89.65:58468.service: Deactivated successfully. Oct 2 18:51:40.674874 systemd-logind[1725]: Session 7 logged out. Waiting for processes to exit. Oct 2 18:51:40.677685 systemd-logind[1725]: Removed session 7. Oct 2 18:51:40.667000 audit[1992]: CRED_DISP pid=1992 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 18:51:40.690073 kernel: audit: type=1106 audit(1696272700.667:632): pid=1992 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 18:51:40.690139 kernel: audit: type=1104 audit(1696272700.667:633): pid=1992 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 18:51:40.690182 kernel: audit: type=1131 audit(1696272700.667:634): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.28.169:22-139.178.89.65:58468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 18:51:40.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.28.169:22-139.178.89.65:58468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:40.716823 kubelet[2198]: E1002 18:51:40.716738 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:40.731217 kubelet[2198]: E1002 18:51:40.731153 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:40.831945 kubelet[2198]: E1002 18:51:40.831905 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:40.933658 kubelet[2198]: E1002 18:51:40.932833 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:41.033908 kubelet[2198]: E1002 18:51:41.033853 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:41.134779 kubelet[2198]: E1002 18:51:41.134729 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:41.236068 kubelet[2198]: E1002 18:51:41.235750 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:41.336610 kubelet[2198]: E1002 18:51:41.336576 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:41.437310 kubelet[2198]: E1002 18:51:41.437261 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:41.538357 kubelet[2198]: E1002 18:51:41.538045 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:41.638702 kubelet[2198]: E1002 18:51:41.638669 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:41.717306 kubelet[2198]: E1002 18:51:41.717259 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:41.739104 kubelet[2198]: E1002 18:51:41.739073 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:41.839900 kubelet[2198]: E1002 18:51:41.839858 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:41.940782 kubelet[2198]: E1002 18:51:41.940742 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:42.041321 kubelet[2198]: E1002 18:51:42.041267 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:42.142350 kubelet[2198]: E1002 18:51:42.141936 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:42.242756 kubelet[2198]: E1002 18:51:42.242724 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:42.343459 kubelet[2198]: E1002 18:51:42.343411 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:42.444464 kubelet[2198]: E1002 18:51:42.444147 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:42.544911 kubelet[2198]: E1002 18:51:42.544879 2198 kubelet.go:2448] "Error getting node" err="node \"172.31.28.169\" not found" Oct 2 18:51:42.646162 kubelet[2198]: I1002 18:51:42.646127 2198 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 18:51:42.646915 env[1730]: time="2023-10-02T18:51:42.646847975Z" 
level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 2 18:51:42.647812 kubelet[2198]: I1002 18:51:42.647680 2198 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 18:51:42.648349 kubelet[2198]: E1002 18:51:42.648308 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:51:42.717776 kubelet[2198]: I1002 18:51:42.717477 2198 apiserver.go:52] "Watching apiserver" Oct 2 18:51:42.717776 kubelet[2198]: E1002 18:51:42.717597 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:42.721482 kubelet[2198]: I1002 18:51:42.721430 2198 topology_manager.go:205] "Topology Admit Handler" Oct 2 18:51:42.721625 kubelet[2198]: I1002 18:51:42.721562 2198 topology_manager.go:205] "Topology Admit Handler" Oct 2 18:51:42.731756 systemd[1]: Created slice kubepods-burstable-pod8841f515_58a8_4e3a_8730_62a6b2acec2c.slice. Oct 2 18:51:42.754816 systemd[1]: Created slice kubepods-besteffort-pod3fb01704_6014_43ff_a3be_a87f72ea114a.slice. Oct 2 18:51:42.841916 kubelet[2198]: I1002 18:51:42.841857 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-hostproc\") pod \"cilium-54x2v\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " pod="kube-system/cilium-54x2v" Oct 2 18:51:42.842115 kubelet[2198]: I1002 18:51:42.841934 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-host-proc-sys-kernel\") pod \"cilium-54x2v\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " pod="kube-system/cilium-54x2v" Oct 2 18:51:42.842115 kubelet[2198]: I1002 18:51:42.841986 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3fb01704-6014-43ff-a3be-a87f72ea114a-kube-proxy\") pod \"kube-proxy-wj2vv\" (UID: \"3fb01704-6014-43ff-a3be-a87f72ea114a\") " pod="kube-system/kube-proxy-wj2vv" Oct 2 18:51:42.842115 kubelet[2198]: I1002 18:51:42.842044 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-cilium-run\") pod \"cilium-54x2v\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " pod="kube-system/cilium-54x2v" Oct 2 18:51:42.842115 kubelet[2198]: I1002 18:51:42.842092 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-cni-path\") pod \"cilium-54x2v\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " pod="kube-system/cilium-54x2v" Oct 2 18:51:42.842378 kubelet[2198]: I1002 18:51:42.842137 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-etc-cni-netd\") pod \"cilium-54x2v\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " pod="kube-system/cilium-54x2v" Oct 2 18:51:42.842378 kubelet[2198]: I1002 18:51:42.842180 2198 reconciler.go:357] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8841f515-58a8-4e3a-8730-62a6b2acec2c-clustermesh-secrets\") pod \"cilium-54x2v\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " pod="kube-system/cilium-54x2v" Oct 2 18:51:42.842378 kubelet[2198]: I1002 18:51:42.842254 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8841f515-58a8-4e3a-8730-62a6b2acec2c-cilium-config-path\") pod \"cilium-54x2v\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " pod="kube-system/cilium-54x2v" Oct 2 18:51:42.842378 kubelet[2198]: I1002 18:51:42.842307 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fb01704-6014-43ff-a3be-a87f72ea114a-lib-modules\") pod \"kube-proxy-wj2vv\" (UID: \"3fb01704-6014-43ff-a3be-a87f72ea114a\") " pod="kube-system/kube-proxy-wj2vv" Oct 2 18:51:42.842378 kubelet[2198]: I1002 18:51:42.842350 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-cilium-cgroup\") pod \"cilium-54x2v\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " pod="kube-system/cilium-54x2v" Oct 2 18:51:42.842669 kubelet[2198]: I1002 18:51:42.842393 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgzrc\" (UniqueName: \"kubernetes.io/projected/8841f515-58a8-4e3a-8730-62a6b2acec2c-kube-api-access-lgzrc\") pod \"cilium-54x2v\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " pod="kube-system/cilium-54x2v" Oct 2 18:51:42.842669 kubelet[2198]: I1002 18:51:42.842436 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fb01704-6014-43ff-a3be-a87f72ea114a-xtables-lock\") pod \"kube-proxy-wj2vv\" (UID: \"3fb01704-6014-43ff-a3be-a87f72ea114a\") " pod="kube-system/kube-proxy-wj2vv" Oct 2 18:51:42.842669 kubelet[2198]: I1002 18:51:42.842477 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-xtables-lock\") pod \"cilium-54x2v\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " pod="kube-system/cilium-54x2v" Oct 2 18:51:42.842669 kubelet[2198]: I1002 18:51:42.842518 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-lib-modules\") pod \"cilium-54x2v\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " pod="kube-system/cilium-54x2v" Oct 2 18:51:42.842669 kubelet[2198]: I1002 18:51:42.842564 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-host-proc-sys-net\") pod \"cilium-54x2v\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " pod="kube-system/cilium-54x2v" Oct 2 18:51:42.842669 kubelet[2198]: I1002 18:51:42.842605 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8841f515-58a8-4e3a-8730-62a6b2acec2c-hubble-tls\") pod \"cilium-54x2v\" (UID: 
\"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " pod="kube-system/cilium-54x2v" Oct 2 18:51:42.842988 kubelet[2198]: I1002 18:51:42.842654 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbpgf\" (UniqueName: \"kubernetes.io/projected/3fb01704-6014-43ff-a3be-a87f72ea114a-kube-api-access-cbpgf\") pod \"kube-proxy-wj2vv\" (UID: \"3fb01704-6014-43ff-a3be-a87f72ea114a\") " pod="kube-system/kube-proxy-wj2vv" Oct 2 18:51:42.842988 kubelet[2198]: I1002 18:51:42.842701 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-bpf-maps\") pod \"cilium-54x2v\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " pod="kube-system/cilium-54x2v" Oct 2 18:51:42.842988 kubelet[2198]: I1002 18:51:42.842719 2198 reconciler.go:169] "Reconciler: start to sync state" Oct 2 18:51:42.891219 kubelet[2198]: E1002 18:51:42.891136 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:51:42.894465 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Oct 2 18:51:42.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:42.904231 kernel: audit: type=1131 audit(1696272702.894:635): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 18:51:42.921000 audit: BPF prog-id=61 op=UNLOAD Oct 2 18:51:42.921000 audit: BPF prog-id=60 op=UNLOAD Oct 2 18:51:42.927750 kernel: audit: type=1334 audit(1696272702.921:636): prog-id=61 op=UNLOAD Oct 2 18:51:42.927831 kernel: audit: type=1334 audit(1696272702.921:637): prog-id=60 op=UNLOAD Oct 2 18:51:42.927873 kernel: audit: type=1334 audit(1696272702.921:638): prog-id=59 op=UNLOAD Oct 2 18:51:42.921000 audit: BPF prog-id=59 op=UNLOAD Oct 2 18:51:43.364170 env[1730]: time="2023-10-02T18:51:43.364087816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wj2vv,Uid:3fb01704-6014-43ff-a3be-a87f72ea114a,Namespace:kube-system,Attempt:0,}" Oct 2 18:51:43.647818 env[1730]: time="2023-10-02T18:51:43.647518281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-54x2v,Uid:8841f515-58a8-4e3a-8730-62a6b2acec2c,Namespace:kube-system,Attempt:0,}" Oct 2 18:51:43.718430 kubelet[2198]: E1002 18:51:43.718358 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:43.913095 env[1730]: time="2023-10-02T18:51:43.912757172Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 18:51:43.915634 env[1730]: time="2023-10-02T18:51:43.915584339Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 18:51:43.918449 env[1730]: time="2023-10-02T18:51:43.918350633Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Oct 2 18:51:43.921238 env[1730]: time="2023-10-02T18:51:43.921149303Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 18:51:43.924938 env[1730]: time="2023-10-02T18:51:43.924888189Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 18:51:43.930015 env[1730]: time="2023-10-02T18:51:43.929965722Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 18:51:43.933656 env[1730]: time="2023-10-02T18:51:43.933571973Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 18:51:43.935780 env[1730]: time="2023-10-02T18:51:43.935730603Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 18:51:43.960158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3241422177.mount: Deactivated successfully. Oct 2 18:51:43.991752 env[1730]: time="2023-10-02T18:51:43.991578333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 18:51:43.991752 env[1730]: time="2023-10-02T18:51:43.991656024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 18:51:43.991752 env[1730]: time="2023-10-02T18:51:43.991682469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 18:51:43.992511 env[1730]: time="2023-10-02T18:51:43.992255862Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/22e10e7d09e7333cd83fd8e001f074514ed5ec396a2c4efacd5b60174c582118 pid=2295 runtime=io.containerd.runc.v2 Oct 2 18:51:44.027964 env[1730]: time="2023-10-02T18:51:44.027616828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 18:51:44.028800 env[1730]: time="2023-10-02T18:51:44.028341085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 18:51:44.029072 env[1730]: time="2023-10-02T18:51:44.028988109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 18:51:44.029768 env[1730]: time="2023-10-02T18:51:44.029692919Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458 pid=2317 runtime=io.containerd.runc.v2 Oct 2 18:51:44.037138 systemd[1]: Started cri-containerd-22e10e7d09e7333cd83fd8e001f074514ed5ec396a2c4efacd5b60174c582118.scope. 
Oct 2 18:51:44.047724 systemd[1]: run-containerd-runc-k8s.io-22e10e7d09e7333cd83fd8e001f074514ed5ec396a2c4efacd5b60174c582118-runc.qM7XtA.mount: Deactivated successfully. Oct 2 18:51:44.088223 systemd[1]: Started cri-containerd-3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458.scope. Oct 2 18:51:44.108000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.108000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.120349 kernel: audit: type=1400 audit(1696272704.108:639): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.108000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.108000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.108000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.108000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.108000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.108000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.108000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.115000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.115000 audit: BPF prog-id=73 op=LOAD Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { bpf } for pid=2306 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000145b38 a2=10 a3=0 items=0 ppid=2295 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:44.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232653130653764303965373333336364383366643865303031663037 Oct 2 18:51:44.116000 audit[2306]: 
AVC avc: denied { perfmon } for pid=2306 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=2295 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:44.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232653130653764303965373333336364383366643865303031663037 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { bpf } for pid=2306 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { bpf } for pid=2306 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { bpf } for pid=2306 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { perfmon } for pid=2306 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { perfmon } for pid=2306 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { perfmon } for pid=2306 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { perfmon } for pid=2306 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { perfmon } for pid=2306 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { bpf } for pid=2306 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { bpf } for pid=2306 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit: BPF prog-id=74 op=LOAD Oct 2 18:51:44.116000 audit[2306]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001458e0 a2=78 a3=0 items=0 ppid=2295 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:44.116000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232653130653764303965373333336364383366643865303031663037 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { bpf } for pid=2306 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { bpf } for pid=2306 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { perfmon } for pid=2306 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { perfmon } for pid=2306 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { perfmon } for pid=2306 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { perfmon } for pid=2306 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { perfmon } for pid=2306 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { bpf } for pid=2306 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { bpf } for pid=2306 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit: BPF prog-id=75 op=LOAD Oct 2 18:51:44.116000 audit[2306]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000145670 a2=78 a3=0 items=0 ppid=2295 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:44.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232653130653764303965373333336364383366643865303031663037 Oct 2 18:51:44.116000 audit: BPF prog-id=75 op=UNLOAD Oct 2 18:51:44.116000 audit: BPF prog-id=74 op=UNLOAD Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { bpf } for pid=2306 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { bpf } for pid=2306 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { bpf } for pid=2306 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { perfmon } for pid=2306 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { perfmon } for pid=2306 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { perfmon } for pid=2306 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { perfmon } for pid=2306 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { perfmon } for pid=2306 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { bpf } for pid=2306 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit[2306]: AVC avc: denied { bpf } for pid=2306 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.116000 audit: BPF prog-id=76 op=LOAD Oct 2 18:51:44.116000 audit[2306]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000145b40 a2=78 a3=0 items=0 ppid=2295 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:44.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232653130653764303965373333336364383366643865303031663037 Oct 2 18:51:44.157000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.157000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.157000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.157000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.157000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.157000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.157000 audit[1]: AVC avc: denied { perfmon } for 
pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.157000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.157000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.157000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.157000 audit: BPF prog-id=77 op=LOAD Oct 2 18:51:44.159000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.159000 audit[2329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000145b38 a2=10 a3=0 items=0 ppid=2317 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:44.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335313361363266373862613266623264303061353065303939373937 Oct 2 18:51:44.159000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.159000 audit[2329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=2317 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:44.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335313361363266373862613266623264303061353065303939373937 Oct 2 18:51:44.160000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.160000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.160000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.160000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.160000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 18:51:44.160000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.160000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.160000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.160000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.160000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.160000 audit: BPF prog-id=78 op=LOAD Oct 2 18:51:44.160000 audit[2329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001458e0 a2=78 a3=0 items=0 ppid=2317 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:44.160000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335313361363266373862613266623264303061353065303939373937 Oct 2 18:51:44.162000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.162000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.162000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.162000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.162000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.162000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.162000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.162000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.162000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.162000 audit: BPF prog-id=79 op=LOAD Oct 2 18:51:44.162000 audit[2329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000145670 a2=78 a3=0 items=0 ppid=2317 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:44.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335313361363266373862613266623264303061353065303939373937 Oct 2 18:51:44.165000 audit: BPF prog-id=79 op=UNLOAD Oct 2 18:51:44.165000 audit: BPF prog-id=78 op=UNLOAD Oct 2 18:51:44.166000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.166000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.166000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.166000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.166000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.166000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.166000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.166000 audit[2329]: AVC avc: denied { perfmon } for pid=2329 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.166000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.166000 audit[2329]: AVC avc: denied { bpf } for pid=2329 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:44.166000 audit: BPF prog-id=80 op=LOAD Oct 2 18:51:44.166000 audit[2329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000145b40 a2=78 a3=0 items=0 ppid=2317 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:44.166000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335313361363266373862613266623264303061353065303939373937 Oct 2 18:51:44.184501 env[1730]: time="2023-10-02T18:51:44.184424017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wj2vv,Uid:3fb01704-6014-43ff-a3be-a87f72ea114a,Namespace:kube-system,Attempt:0,} returns sandbox id \"22e10e7d09e7333cd83fd8e001f074514ed5ec396a2c4efacd5b60174c582118\"" Oct 2 18:51:44.188478 env[1730]: time="2023-10-02T18:51:44.188402492Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\"" Oct 2 18:51:44.213628 env[1730]: time="2023-10-02T18:51:44.213547269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-54x2v,Uid:8841f515-58a8-4e3a-8730-62a6b2acec2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\"" Oct 2 18:51:44.719506 kubelet[2198]: E1002 18:51:44.719437 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:45.522086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2081652209.mount: Deactivated successfully. Oct 2 18:51:45.720027 kubelet[2198]: E1002 18:51:45.719970 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:46.133565 env[1730]: time="2023-10-02T18:51:46.133506312Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 18:51:46.136949 env[1730]: time="2023-10-02T18:51:46.136897972Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:36ad84e6a838b02d80a9db87b13c83185253f647e2af2f58f91ac1346103ff4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 18:51:46.139418 env[1730]: time="2023-10-02T18:51:46.139353855Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 18:51:46.141902 env[1730]: time="2023-10-02T18:51:46.141841427Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 18:51:46.142964 env[1730]: time="2023-10-02T18:51:46.142921257Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\" returns image reference \"sha256:36ad84e6a838b02d80a9db87b13c83185253f647e2af2f58f91ac1346103ff4e\"" Oct 2 18:51:46.144698 env[1730]: time="2023-10-02T18:51:46.144642049Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\"" Oct 2 18:51:46.146475 env[1730]: time="2023-10-02T18:51:46.146403688Z" level=info msg="CreateContainer within sandbox \"22e10e7d09e7333cd83fd8e001f074514ed5ec396a2c4efacd5b60174c582118\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 18:51:46.173492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1213866846.mount: Deactivated successfully. Oct 2 18:51:46.181921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3229909485.mount: Deactivated successfully. 
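The audit PROCTITLE fields above are hex-encoded command lines with NUL-separated arguments, which is why the runc invocations are unreadable as printed. Below is a minimal Python sketch for decoding them, under the assumption that the hex blob is copied verbatim from a record (ausearch -i performs the same interpretation); the sample blob is a shortened prefix of the runc records above.

def decode_proctitle(hex_blob: str) -> str:
    """Decode an audit PROCTITLE field: hex-encoded argv joined by NUL bytes."""
    raw = bytes.fromhex(hex_blob)
    # The kernel may truncate the tail of a long command line; decode leniently.
    return " ".join(part.decode("utf-8", errors="replace")
                    for part in raw.split(b"\x00") if part)

# Shortened prefix of one of the runc PROCTITLE blobs above:
blob = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E657264"
        "2F72756E632F6B38732E696F002D2D6C6F67")
print(decode_proctitle(blob))   # -> runc --root /run/containerd/runc/k8s.io --log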
Oct 2 18:51:46.189367 env[1730]: time="2023-10-02T18:51:46.189304107Z" level=info msg="CreateContainer within sandbox \"22e10e7d09e7333cd83fd8e001f074514ed5ec396a2c4efacd5b60174c582118\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2f9caea8cb9072a101050c606f240871a56f72084fd6f367978a2b3ee745d0eb\"" Oct 2 18:51:46.190348 env[1730]: time="2023-10-02T18:51:46.190294992Z" level=info msg="StartContainer for \"2f9caea8cb9072a101050c606f240871a56f72084fd6f367978a2b3ee745d0eb\"" Oct 2 18:51:46.246312 systemd[1]: Started cri-containerd-2f9caea8cb9072a101050c606f240871a56f72084fd6f367978a2b3ee745d0eb.scope. Oct 2 18:51:46.314233 kernel: kauditd_printk_skb: 113 callbacks suppressed Oct 2 18:51:46.314370 kernel: audit: type=1400 audit(1696272706.303:675): avc: denied { perfmon } for pid=2378 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.303000 audit[2378]: AVC avc: denied { perfmon } for pid=2378 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.303000 audit[2378]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2295 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.328239 kernel: audit: type=1300 audit(1696272706.303:675): arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2295 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266396361656138636239303732613130313035306336303666323430 Oct 2 18:51:46.303000 audit[2378]: AVC avc: denied { bpf } for pid=2378 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.340262 kernel: audit: type=1327 audit(1696272706.303:675): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266396361656138636239303732613130313035306336303666323430 Oct 2 18:51:46.352235 kernel: audit: type=1400 audit(1696272706.303:676): avc: denied { bpf } for pid=2378 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.352385 kernel: audit: type=1400 audit(1696272706.303:676): avc: denied { bpf } for pid=2378 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.303000 audit[2378]: AVC avc: denied { bpf } for pid=2378 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.303000 audit[2378]: AVC avc: denied { bpf } for pid=2378 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.367652 kernel: audit: type=1400 audit(1696272706.303:676): avc: denied { bpf } for pid=2378 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.303000 audit[2378]: AVC avc: denied { perfmon } for pid=2378 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.375496 kernel: audit: type=1400 audit(1696272706.303:676): avc: denied { perfmon } for pid=2378 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.303000 audit[2378]: AVC avc: denied { perfmon } for pid=2378 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.383489 kernel: audit: type=1400 audit(1696272706.303:676): avc: denied { perfmon } for pid=2378 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.383554 kernel: audit: type=1400 audit(1696272706.303:676): avc: denied { perfmon } for pid=2378 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.303000 audit[2378]: AVC avc: denied { perfmon } for pid=2378 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.303000 audit[2378]: AVC avc: denied { perfmon } for pid=2378 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.397397 env[1730]: time="2023-10-02T18:51:46.397337744Z" level=info msg="StartContainer for \"2f9caea8cb9072a101050c606f240871a56f72084fd6f367978a2b3ee745d0eb\" returns successfully" Oct 2 18:51:46.401284 kernel: audit: type=1400 audit(1696272706.303:676): avc: denied { perfmon } for pid=2378 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.303000 audit[2378]: AVC avc: denied { perfmon } for pid=2378 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.303000 audit[2378]: AVC avc: denied { bpf } for pid=2378 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.303000 audit[2378]: AVC avc: denied { bpf } for pid=2378 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.303000 audit: BPF prog-id=81 op=LOAD Oct 2 18:51:46.303000 audit[2378]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2295 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.303000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266396361656138636239303732613130313035306336303666323430 Oct 2 18:51:46.306000 audit[2378]: AVC avc: denied { bpf } for pid=2378 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.306000 audit[2378]: AVC avc: denied { bpf } for pid=2378 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.306000 audit[2378]: AVC avc: denied { perfmon } for pid=2378 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.306000 audit[2378]: AVC avc: denied { perfmon } for pid=2378 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.306000 audit[2378]: AVC avc: denied { perfmon } for pid=2378 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.306000 audit[2378]: AVC avc: denied { perfmon } for pid=2378 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.306000 audit[2378]: AVC avc: denied { perfmon } for pid=2378 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.306000 audit[2378]: AVC avc: denied { bpf } for pid=2378 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.306000 audit[2378]: AVC avc: denied { bpf } for pid=2378 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.306000 audit: BPF prog-id=82 op=LOAD Oct 2 18:51:46.306000 audit[2378]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=2295 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266396361656138636239303732613130313035306336303666323430 Oct 2 18:51:46.313000 audit: BPF prog-id=82 op=UNLOAD Oct 2 18:51:46.313000 audit: BPF prog-id=81 op=UNLOAD Oct 2 18:51:46.313000 audit[2378]: AVC avc: denied { bpf } for pid=2378 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.313000 audit[2378]: AVC avc: denied { bpf } for pid=2378 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.313000 audit[2378]: AVC avc: denied { bpf } for pid=2378 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.313000 audit[2378]: AVC avc: denied { perfmon } for pid=2378 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.313000 audit[2378]: AVC avc: denied { perfmon } for pid=2378 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.313000 audit[2378]: AVC avc: denied { perfmon } for pid=2378 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.313000 audit[2378]: AVC avc: denied { perfmon } for pid=2378 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.313000 audit[2378]: AVC avc: denied { perfmon } for pid=2378 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.313000 audit[2378]: AVC avc: denied { bpf } for pid=2378 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.313000 audit[2378]: AVC avc: denied { bpf } for pid=2378 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:51:46.313000 audit: BPF prog-id=83 op=LOAD Oct 2 18:51:46.313000 audit[2378]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2295 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266396361656138636239303732613130313035306336303666323430 Oct 2 18:51:46.484811 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) Oct 2 18:51:46.484949 kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes) Oct 2 18:51:46.485032 kernel: IPVS: ipvs loaded. Oct 2 18:51:46.503234 kernel: IPVS: [rr] scheduler registered. Oct 2 18:51:46.516248 kernel: IPVS: [wrr] scheduler registered. Oct 2 18:51:46.529254 kernel: IPVS: [sh] scheduler registered. 
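The IPVS lines above show the kernel registering the ip_vs core plus the rr, wrr and sh schedulers, presumably loaded while kube-proxy probes for IPVS support. A small sketch, assuming it runs on the node with /sys mounted, that lists which ip_vs modules actually ended up loaded:

import os

def loaded_ipvs_modules(sys_module: str = "/sys/module") -> list[str]:
    """List loaded kernel modules whose names start with ip_vs
    (the IPVS core and its schedulers, e.g. ip_vs_rr, ip_vs_wrr, ip_vs_sh)."""
    return sorted(name for name in os.listdir(sys_module)
                  if name.startswith("ip_vs"))

print(loaded_ipvs_modules())   # e.g. ['ip_vs', 'ip_vs_rr', 'ip_vs_sh', 'ip_vs_wrr']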
Oct 2 18:51:46.629000 audit[2436]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=2436 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:46.629000 audit[2436]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff39588b0 a2=0 a3=ffff86a336c0 items=0 ppid=2388 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.629000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 18:51:46.633000 audit[2437]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=2437 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:46.633000 audit[2437]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff90b67d0 a2=0 a3=ffffa5b036c0 items=0 ppid=2388 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.633000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 18:51:46.638000 audit[2438]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=2438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:46.638000 audit[2438]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff90f9700 a2=0 a3=ffff8e6406c0 items=0 ppid=2388 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.638000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 18:51:46.644000 audit[2439]: NETFILTER_CFG table=nat:38 family=10 entries=1 op=nft_register_chain pid=2439 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:46.644000 audit[2439]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd85cd680 a2=0 a3=ffffb0b946c0 items=0 ppid=2388 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.644000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 18:51:46.645000 audit[2440]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_chain pid=2440 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:46.645000 audit[2440]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe79fcf50 a2=0 a3=ffffbb4a16c0 items=0 ppid=2388 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.645000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 18:51:46.650000 audit[2441]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=2441 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 
18:51:46.650000 audit[2441]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffddf19b20 a2=0 a3=ffffa139a6c0 items=0 ppid=2388 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.650000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 18:51:46.720333 kubelet[2198]: E1002 18:51:46.720262 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:46.733000 audit[2442]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2442 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:46.733000 audit[2442]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe4f5f590 a2=0 a3=ffffa0aac6c0 items=0 ppid=2388 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.733000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 18:51:46.741000 audit[2444]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:46.741000 audit[2444]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd7348e50 a2=0 a3=ffffb28456c0 items=0 ppid=2388 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.741000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 18:51:46.753000 audit[2447]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=2447 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:46.753000 audit[2447]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffffaff9660 a2=0 a3=ffffa50476c0 items=0 ppid=2388 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.753000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 18:51:46.757000 audit[2448]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:46.757000 audit[2448]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd2b73bc0 a2=0 a3=ffffb51ed6c0 items=0 ppid=2388 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 
18:51:46.757000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 18:51:46.765000 audit[2450]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2450 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:46.765000 audit[2450]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffef413430 a2=0 a3=ffff8709a6c0 items=0 ppid=2388 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.765000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 18:51:46.769000 audit[2451]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=2451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:46.769000 audit[2451]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd6b793f0 a2=0 a3=ffff92a6b6c0 items=0 ppid=2388 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.769000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 18:51:46.778000 audit[2453]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=2453 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:46.778000 audit[2453]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc3d8d5d0 a2=0 a3=ffffa07056c0 items=0 ppid=2388 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.778000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 18:51:46.791000 audit[2456]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:46.791000 audit[2456]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd0b719e0 a2=0 a3=ffffa05296c0 items=0 ppid=2388 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.791000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 18:51:46.795000 audit[2457]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2457 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:46.795000 audit[2457]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff538c730 a2=0 
a3=ffff8ae6f6c0 items=0 ppid=2388 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.795000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 18:51:46.803000 audit[2459]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2459 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:46.803000 audit[2459]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffeada4f70 a2=0 a3=ffffb09386c0 items=0 ppid=2388 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.803000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 18:51:46.807000 audit[2460]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:46.807000 audit[2460]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffca037160 a2=0 a3=ffffaa94a6c0 items=0 ppid=2388 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.807000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 18:51:46.816000 audit[2462]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=2462 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:46.816000 audit[2462]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff3230100 a2=0 a3=ffffb1ccb6c0 items=0 ppid=2388 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.816000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 18:51:46.828000 audit[2465]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2465 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:46.828000 audit[2465]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd8c6a840 a2=0 a3=ffffb01a96c0 items=0 ppid=2388 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.828000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 18:51:46.840000 audit[2468]: NETFILTER_CFG 
table=filter:54 family=2 entries=1 op=nft_register_rule pid=2468 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:46.840000 audit[2468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffff9b4180 a2=0 a3=ffffbe5d96c0 items=0 ppid=2388 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.840000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 18:51:46.844000 audit[2469]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:46.844000 audit[2469]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc5f0d530 a2=0 a3=ffff8010b6c0 items=0 ppid=2388 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.844000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 18:51:46.851000 audit[2471]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:46.851000 audit[2471]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=fffffcef4050 a2=0 a3=ffffb3acf6c0 items=0 ppid=2388 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.851000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 18:51:46.863000 audit[2474]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 18:51:46.863000 audit[2474]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffff5464820 a2=0 a3=ffffac2786c0 items=0 ppid=2388 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.863000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 18:51:46.890000 audit[2478]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=2478 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 18:51:46.890000 audit[2478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffe5ef04a0 a2=0 a3=ffff9e4b36c0 items=0 ppid=2388 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.890000 
audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 18:51:46.907000 audit[2478]: NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 18:51:46.907000 audit[2478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffe5ef04a0 a2=0 a3=ffff9e4b36c0 items=0 ppid=2388 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.907000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 18:51:46.915000 audit[2482]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=2482 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:46.915000 audit[2482]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc83b50f0 a2=0 a3=ffff940616c0 items=0 ppid=2388 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.915000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 18:51:46.923000 audit[2484]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:46.923000 audit[2484]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffec4a78d0 a2=0 a3=ffff92e276c0 items=0 ppid=2388 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.923000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 18:51:46.936000 audit[2487]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=2487 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:46.936000 audit[2487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff1e419a0 a2=0 a3=ffffbaec46c0 items=0 ppid=2388 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.936000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 18:51:46.939000 audit[2488]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:46.939000 audit[2488]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffb802b70 a2=0 a3=ffffb21a86c0 items=0 ppid=2388 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.939000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 18:51:46.947000 audit[2490]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=2490 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:46.947000 audit[2490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff0dd83b0 a2=0 a3=ffffbacae6c0 items=0 ppid=2388 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.947000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 18:51:46.951000 audit[2491]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:46.951000 audit[2491]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcbbb5ab0 a2=0 a3=ffff915a56c0 items=0 ppid=2388 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.951000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 18:51:46.960000 audit[2493]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=2493 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:46.960000 audit[2493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe69ccf40 a2=0 a3=ffffaa95a6c0 items=0 ppid=2388 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.960000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 18:51:46.971000 audit[2496]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2496 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:46.971000 audit[2496]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffeda52280 a2=0 a3=ffffb2dc56c0 items=0 ppid=2388 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.971000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 18:51:46.975000 audit[2497]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2497 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:46.975000 audit[2497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd280a2e0 a2=0 a3=ffff9556d6c0 items=0 ppid=2388 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.975000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 18:51:46.983000 audit[2499]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:46.983000 audit[2499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe6175800 a2=0 a3=ffffb01446c0 items=0 ppid=2388 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.983000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 18:51:46.987000 audit[2500]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2500 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:46.987000 audit[2500]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff8811ee0 a2=0 a3=ffffa18d66c0 items=0 ppid=2388 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.987000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 18:51:46.995000 audit[2502]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2502 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:46.995000 audit[2502]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe04055d0 a2=0 a3=ffffbd1e46c0 items=0 ppid=2388 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:46.995000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 18:51:47.007000 audit[2505]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:47.007000 audit[2505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc6526000 a2=0 a3=ffffbd5de6c0 items=0 ppid=2388 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:47.007000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 18:51:47.019000 audit[2508]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=2508 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:47.019000 audit[2508]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc4366740 a2=0 a3=ffff7f7b96c0 items=0 ppid=2388 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:47.019000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 18:51:47.022000 audit[2509]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:47.022000 audit[2509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff5a77280 a2=0 a3=ffff98eba6c0 items=0 ppid=2388 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:47.022000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 18:51:47.031000 audit[2511]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=2511 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:47.031000 audit[2511]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffd4a3d7a0 a2=0 a3=ffffb8a716c0 items=0 ppid=2388 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:47.031000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 18:51:47.043000 audit[2514]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=2514 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 18:51:47.043000 audit[2514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffd50bbe00 a2=0 a3=ffff7f97f6c0 items=0 ppid=2388 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:47.043000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 18:51:47.064000 audit[2518]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=2518 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 18:51:47.064000 audit[2518]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffed6a3000 a2=0 a3=ffff97e936c0 items=0 ppid=2388 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:47.064000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 18:51:47.065000 audit[2518]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=2518 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 18:51:47.065000 audit[2518]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1860 a0=3 a1=ffffed6a3000 a2=0 a3=ffff97e936c0 items=0 ppid=2388 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:51:47.065000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 18:51:47.705182 kubelet[2198]: E1002 18:51:47.705130 2198 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:47.720515 kubelet[2198]: E1002 18:51:47.720450 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:47.893878 kubelet[2198]: E1002 18:51:47.893824 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:51:48.721089 kubelet[2198]: E1002 18:51:48.721021 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:49.721580 kubelet[2198]: E1002 18:51:49.721503 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:50.722315 kubelet[2198]: E1002 18:51:50.722252 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:51.722639 kubelet[2198]: E1002 18:51:51.722517 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:52.723093 kubelet[2198]: E1002 18:51:52.722999 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:52.894876 kubelet[2198]: E1002 18:51:52.894769 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:51:53.410334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3878809034.mount: Deactivated successfully. 
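The long run of NETFILTER_CFG records above (family=2 is IPv4, family=10 is IPv6) is kube-proxy registering its base iptables/ip6tables chains and rules; the decoded proctitles name KUBE-PROXY-CANARY, KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS, KUBE-SERVICES, KUBE-FORWARD and KUBE-PROXY-FIREWALL among others. A rough Python sketch, assuming the journal is available as plain text lines, that condenses such records into per-table, per-family counts:

import re
from collections import Counter

# Field layout taken from the NETFILTER_CFG records above, e.g.
#   NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain ... comm="iptables"
NETFILTER = re.compile(
    r'NETFILTER_CFG table=(?P<table>\w+):\d+ family=(?P<family>\d+) '
    r'entries=(?P<entries>\d+) op=(?P<op>\w+)')

def summarize(lines):
    counts = Counter()
    for line in lines:
        m = NETFILTER.search(line)
        if m:
            family = "ipv4" if m["family"] == "2" else "ipv6"
            counts[(family, m["table"], m["op"])] += int(m["entries"])
    return counts

# Usage:
#   with open("journal.txt") as f:
#       for key, n in sorted(summarize(f).items()):
#           print(key, n)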
Oct 2 18:51:53.724039 kubelet[2198]: E1002 18:51:53.723874 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:54.724474 kubelet[2198]: E1002 18:51:54.724405 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:55.725520 kubelet[2198]: E1002 18:51:55.725452 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:56.726054 kubelet[2198]: E1002 18:51:56.725980 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:57.311870 env[1730]: time="2023-10-02T18:51:57.311804254Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 18:51:57.315151 env[1730]: time="2023-10-02T18:51:57.315101865Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4204f456d3e4a8a7ac29109cf66dfd9b53e82d3f2e8574599e358096d890b8db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 18:51:57.318301 env[1730]: time="2023-10-02T18:51:57.318231848Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 18:51:57.319804 env[1730]: time="2023-10-02T18:51:57.319725279Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\" returns image reference \"sha256:4204f456d3e4a8a7ac29109cf66dfd9b53e82d3f2e8574599e358096d890b8db\"" Oct 2 18:51:57.323475 env[1730]: time="2023-10-02T18:51:57.323310751Z" level=info msg="CreateContainer within sandbox \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 18:51:57.339130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4204902093.mount: Deactivated successfully. Oct 2 18:51:57.349006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3152721228.mount: Deactivated successfully. Oct 2 18:51:57.356820 env[1730]: time="2023-10-02T18:51:57.356758380Z" level=info msg="CreateContainer within sandbox \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c\"" Oct 2 18:51:57.357928 env[1730]: time="2023-10-02T18:51:57.357882159Z" level=info msg="StartContainer for \"bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c\"" Oct 2 18:51:57.404975 systemd[1]: Started cri-containerd-bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c.scope. Oct 2 18:51:57.413500 update_engine[1726]: I1002 18:51:57.412856 1726 update_attempter.cc:505] Updating boot flags... Oct 2 18:51:57.453086 systemd[1]: cri-containerd-bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c.scope: Deactivated successfully. 
Oct 2 18:51:57.728546 kubelet[2198]: E1002 18:51:57.726234 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:57.897244 kubelet[2198]: E1002 18:51:57.895930 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:51:58.334919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c-rootfs.mount: Deactivated successfully. Oct 2 18:51:58.726630 kubelet[2198]: E1002 18:51:58.726485 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:51:58.853901 env[1730]: time="2023-10-02T18:51:58.853816284Z" level=info msg="shim disconnected" id=bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c Oct 2 18:51:58.854494 env[1730]: time="2023-10-02T18:51:58.853896071Z" level=warning msg="cleaning up after shim disconnected" id=bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c namespace=k8s.io Oct 2 18:51:58.854494 env[1730]: time="2023-10-02T18:51:58.853920854Z" level=info msg="cleaning up dead shim" Oct 2 18:51:58.880601 env[1730]: time="2023-10-02T18:51:58.880514439Z" level=warning msg="cleanup warnings time=\"2023-10-02T18:51:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2726 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T18:51:58Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 18:51:58.881131 env[1730]: time="2023-10-02T18:51:58.880981373Z" level=error msg="copy shim log" error="read /proc/self/fd/52: file already closed" Oct 2 18:51:58.884403 env[1730]: time="2023-10-02T18:51:58.884333873Z" level=error msg="Failed to pipe stdout of container \"bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c\"" error="reading from a closed fifo" Oct 2 18:51:58.884515 env[1730]: time="2023-10-02T18:51:58.884429766Z" level=error msg="Failed to pipe stderr of container \"bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c\"" error="reading from a closed fifo" Oct 2 18:51:58.887380 env[1730]: time="2023-10-02T18:51:58.887253681Z" level=error msg="StartContainer for \"bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 18:51:58.888555 kubelet[2198]: E1002 18:51:58.887827 2198 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c" Oct 2 18:51:58.888555 kubelet[2198]: E1002 18:51:58.888417 2198 kuberuntime_manager.go:862] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 18:51:58.888555 kubelet[2198]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 18:51:58.888555 kubelet[2198]: rm /hostbin/cilium-mount Oct 2 18:51:58.889316 kubelet[2198]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lgzrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 18:51:58.889467 kubelet[2198]: E1002 18:51:58.888512 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-54x2v" podUID=8841f515-58a8-4e3a-8730-62a6b2acec2c Oct 2 18:51:59.284326 env[1730]: time="2023-10-02T18:51:59.284262943Z" level=info msg="CreateContainer within sandbox \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 18:51:59.303006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1781365471.mount: Deactivated successfully. 
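Each StartContainer attempt in this log dies at the same point: runc, while setting up the container's init process, tries to write the requested SELinux process label to /proc/self/attr/keycreate (the spec above asks for SELinuxOptions Type:spc_t, Level:s0) and the kernel rejects the write with EINVAL, so no task is ever created and the shim has nothing to pipe; "reading from a closed fifo" and the missing init.pid are just the fallout. The probe below is a hedged reproduction of that single write, not runc code, and the full label string is a guess derived from the Type/Level in the spec:

```go
// Diagnostic sketch: attempt the /proc/self/attr/keycreate write that runc
// performs before creating session keyrings. Only this throwaway process is
// affected if the write succeeds.
package main

import (
	"errors"
	"fmt"
	"os"
	"syscall"
)

func main() {
	const attr = "/proc/self/attr/keycreate"
	// Hypothetical full context built from Type:spc_t, Level:s0 in the spec above.
	const label = "system_u:system_r:spc_t:s0"

	if cur, err := os.ReadFile("/proc/self/attr/current"); err == nil {
		fmt.Printf("current process label: %q\n", string(cur))
	}

	f, err := os.OpenFile(attr, os.O_WRONLY, 0)
	if err != nil {
		fmt.Printf("cannot open %s: %v\n", attr, err)
		return
	}
	defer f.Close()

	if _, err := f.Write([]byte(label)); err != nil {
		if errors.Is(err, syscall.EINVAL) {
			fmt.Println("EINVAL: kernel refused the keycreate label, matching the runc failure in the log")
		} else {
			fmt.Printf("keycreate write failed: %v\n", err)
		}
		return
	}
	fmt.Println("keycreate label accepted; SELinux keyring labeling works on this host")
}
```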
Oct 2 18:51:59.318114 env[1730]: time="2023-10-02T18:51:59.318030800Z" level=info msg="CreateContainer within sandbox \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab\"" Oct 2 18:51:59.318749 env[1730]: time="2023-10-02T18:51:59.318689334Z" level=info msg="StartContainer for \"fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab\"" Oct 2 18:51:59.369703 systemd[1]: run-containerd-runc-k8s.io-fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab-runc.YeBGQj.mount: Deactivated successfully. Oct 2 18:51:59.374613 systemd[1]: Started cri-containerd-fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab.scope. Oct 2 18:51:59.411130 systemd[1]: cri-containerd-fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab.scope: Deactivated successfully. Oct 2 18:51:59.422667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab-rootfs.mount: Deactivated successfully. Oct 2 18:51:59.435252 env[1730]: time="2023-10-02T18:51:59.435128082Z" level=info msg="shim disconnected" id=fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab Oct 2 18:51:59.435719 env[1730]: time="2023-10-02T18:51:59.435645731Z" level=warning msg="cleaning up after shim disconnected" id=fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab namespace=k8s.io Oct 2 18:51:59.435719 env[1730]: time="2023-10-02T18:51:59.435688300Z" level=info msg="cleaning up dead shim" Oct 2 18:51:59.464643 env[1730]: time="2023-10-02T18:51:59.464552551Z" level=warning msg="cleanup warnings time=\"2023-10-02T18:51:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2766 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T18:51:59Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 18:51:59.465219 env[1730]: time="2023-10-02T18:51:59.465097982Z" level=error msg="copy shim log" error="read /proc/self/fd/52: file already closed" Oct 2 18:51:59.466368 env[1730]: time="2023-10-02T18:51:59.466302740Z" level=error msg="Failed to pipe stdout of container \"fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab\"" error="reading from a closed fifo" Oct 2 18:51:59.469535 env[1730]: time="2023-10-02T18:51:59.469469694Z" level=error msg="Failed to pipe stderr of container \"fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab\"" error="reading from a closed fifo" Oct 2 18:51:59.472142 env[1730]: time="2023-10-02T18:51:59.472030717Z" level=error msg="StartContainer for \"fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 18:51:59.475325 kubelet[2198]: E1002 18:51:59.474096 2198 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" containerID="fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab" Oct 2 18:51:59.475325 kubelet[2198]: E1002 18:51:59.474336 2198 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 18:51:59.475325 kubelet[2198]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 18:51:59.475325 kubelet[2198]: rm /hostbin/cilium-mount Oct 2 18:51:59.475800 kubelet[2198]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lgzrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 18:51:59.476117 kubelet[2198]: E1002 18:51:59.474402 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-54x2v" podUID=8841f515-58a8-4e3a-8730-62a6b2acec2c Oct 2 18:51:59.727726 kubelet[2198]: E1002 18:51:59.727670 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:00.282256 kubelet[2198]: I1002 18:52:00.282185 2198 scope.go:115] "RemoveContainer" containerID="bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c" Oct 2 18:52:00.283145 kubelet[2198]: I1002 18:52:00.282980 2198 scope.go:115] "RemoveContainer" containerID="bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c" Oct 2 18:52:00.285619 env[1730]: time="2023-10-02T18:52:00.285569112Z" level=info msg="RemoveContainer for \"bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c\"" Oct 2 18:52:00.288825 env[1730]: 
time="2023-10-02T18:52:00.288749423Z" level=info msg="RemoveContainer for \"bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c\"" Oct 2 18:52:00.289027 env[1730]: time="2023-10-02T18:52:00.288899381Z" level=error msg="RemoveContainer for \"bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c\" failed" error="failed to set removing state for container \"bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c\": container is already in removing state" Oct 2 18:52:00.289359 kubelet[2198]: E1002 18:52:00.289313 2198 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c\": container is already in removing state" containerID="bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c" Oct 2 18:52:00.289584 kubelet[2198]: E1002 18:52:00.289562 2198 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c": container is already in removing state; Skipping pod "cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c)" Oct 2 18:52:00.290147 kubelet[2198]: E1002 18:52:00.290121 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c)\"" pod="kube-system/cilium-54x2v" podUID=8841f515-58a8-4e3a-8730-62a6b2acec2c Oct 2 18:52:00.293149 env[1730]: time="2023-10-02T18:52:00.293094422Z" level=info msg="RemoveContainer for \"bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c\" returns successfully" Oct 2 18:52:00.728662 kubelet[2198]: E1002 18:52:00.728597 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:01.286797 kubelet[2198]: E1002 18:52:01.286762 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c)\"" pod="kube-system/cilium-54x2v" podUID=8841f515-58a8-4e3a-8730-62a6b2acec2c Oct 2 18:52:01.729654 kubelet[2198]: E1002 18:52:01.729560 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:01.960408 kubelet[2198]: W1002 18:52:01.960328 2198 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8841f515_58a8_4e3a_8730_62a6b2acec2c.slice/cri-containerd-bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c.scope WatchSource:0}: container "bfd126d4beb599e30eb5461683311f4fac7fb6784c351d69d5638b314c5e182c" in namespace "k8s.io": not found Oct 2 18:52:02.730150 kubelet[2198]: E1002 18:52:02.730107 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:02.897725 kubelet[2198]: E1002 18:52:02.897679 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:52:03.731446 kubelet[2198]: 
E1002 18:52:03.731391 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:04.732252 kubelet[2198]: E1002 18:52:04.732172 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:05.068546 kubelet[2198]: W1002 18:52:05.068419 2198 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8841f515_58a8_4e3a_8730_62a6b2acec2c.slice/cri-containerd-fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab.scope WatchSource:0}: task fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab not found: not found Oct 2 18:52:05.732629 kubelet[2198]: E1002 18:52:05.732592 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:06.734028 kubelet[2198]: E1002 18:52:06.733991 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:07.704324 kubelet[2198]: E1002 18:52:07.704261 2198 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:07.735042 kubelet[2198]: E1002 18:52:07.735004 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:07.898746 kubelet[2198]: E1002 18:52:07.898714 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:52:08.735689 kubelet[2198]: E1002 18:52:08.735608 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:09.736748 kubelet[2198]: E1002 18:52:09.736682 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:10.737762 kubelet[2198]: E1002 18:52:10.737711 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:11.738908 kubelet[2198]: E1002 18:52:11.738845 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:12.200831 env[1730]: time="2023-10-02T18:52:12.200755704Z" level=info msg="CreateContainer within sandbox \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 18:52:12.219354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount350028552.mount: Deactivated successfully. Oct 2 18:52:12.230287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2103683063.mount: Deactivated successfully. 
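The periodic kubelet.go:2373 "Container runtime network not ready ... cni plugin not initialized" entries are a downstream symptom: Cilium is the CNI plugin on this node, its agent pod is the one stuck above, so it never writes a network config where the runtime looks for one. A small check along those lines, assuming the conventional /etc/cni/net.d config directory (the actual conf_dir is runtime configuration, not something this log states):

```go
// Minimal sketch: report whether any CNI network config has been installed yet.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/cni/net.d" // assumed default; the runtime's conf_dir may differ

	// CNI plugins drop *.conf or *.conflist files here once they are ready.
	matches, _ := filepath.Glob(filepath.Join(confDir, "*.conf*"))
	if len(matches) == 0 {
		fmt.Println("no CNI network config found; the runtime network stays NotReady")
		os.Exit(1)
	}
	for _, m := range matches {
		fmt.Println("found CNI config:", m)
	}
}
```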
Oct 2 18:52:12.233511 env[1730]: time="2023-10-02T18:52:12.233421772Z" level=info msg="CreateContainer within sandbox \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c\"" Oct 2 18:52:12.235329 env[1730]: time="2023-10-02T18:52:12.235255675Z" level=info msg="StartContainer for \"227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c\"" Oct 2 18:52:12.291518 systemd[1]: Started cri-containerd-227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c.scope. Oct 2 18:52:12.329045 systemd[1]: cri-containerd-227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c.scope: Deactivated successfully. Oct 2 18:52:12.354464 env[1730]: time="2023-10-02T18:52:12.354373611Z" level=info msg="shim disconnected" id=227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c Oct 2 18:52:12.354851 env[1730]: time="2023-10-02T18:52:12.354817347Z" level=warning msg="cleaning up after shim disconnected" id=227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c namespace=k8s.io Oct 2 18:52:12.354996 env[1730]: time="2023-10-02T18:52:12.354968195Z" level=info msg="cleaning up dead shim" Oct 2 18:52:12.381417 env[1730]: time="2023-10-02T18:52:12.381340165Z" level=warning msg="cleanup warnings time=\"2023-10-02T18:52:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2803 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T18:52:12Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 18:52:12.381882 env[1730]: time="2023-10-02T18:52:12.381794317Z" level=error msg="copy shim log" error="read /proc/self/fd/52: file already closed" Oct 2 18:52:12.382263 env[1730]: time="2023-10-02T18:52:12.382182490Z" level=error msg="Failed to pipe stdout of container \"227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c\"" error="reading from a closed fifo" Oct 2 18:52:12.388330 env[1730]: time="2023-10-02T18:52:12.388257745Z" level=error msg="Failed to pipe stderr of container \"227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c\"" error="reading from a closed fifo" Oct 2 18:52:12.390628 env[1730]: time="2023-10-02T18:52:12.390545656Z" level=error msg="StartContainer for \"227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 18:52:12.391017 kubelet[2198]: E1002 18:52:12.390978 2198 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c" Oct 2 18:52:12.391185 kubelet[2198]: E1002 18:52:12.391145 2198 kuberuntime_manager.go:862] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 18:52:12.391185 kubelet[2198]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 18:52:12.391185 kubelet[2198]: rm /hostbin/cilium-mount Oct 2 18:52:12.391185 kubelet[2198]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lgzrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 18:52:12.391607 kubelet[2198]: E1002 18:52:12.391238 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-54x2v" podUID=8841f515-58a8-4e3a-8730-62a6b2acec2c Oct 2 18:52:12.739419 kubelet[2198]: E1002 18:52:12.739353 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:12.900596 kubelet[2198]: E1002 18:52:12.900543 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:52:13.214864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c-rootfs.mount: Deactivated successfully. 
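For reference, the command the mount-cgroup init container keeps failing to run (quoted in the spec dumps above) does three things: copy the cilium-mount helper from the image onto the host through the /hostbin bind mount, nsenter into PID 1's cgroup and mount namespaces to run it so that cgroup2 ends up mounted at /run/cilium/cgroupv2 on the host, then delete the copied helper. A Go rendering of those steps, using only paths and environment values visible in the spec; it is an illustration of the shell one-liner, not Cilium's implementation:

```go
// Illustration of the mount-cgroup init container's `sh -ec` command.
package main

import (
	"io"
	"log"
	"os"
	"os/exec"
)

func copyFile(dst, src string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	const (
		binPath    = "/opt/cni/bin"          // BIN_PATH in the spec (host side of /hostbin)
		cgroupRoot = "/run/cilium/cgroupv2"  // CGROUP_ROOT in the spec
		helperSrc  = "/usr/bin/cilium-mount" // shipped inside the cilium image
		helperDst  = "/hostbin/cilium-mount" // bind-mounted host directory
	)

	if err := copyFile(helperDst, helperSrc); err != nil {
		log.Fatalf("copy helper: %v", err)
	}

	// Enter the host's cgroup and mount namespaces (via /hostproc/1) and run the
	// helper so the cgroup2 filesystem is mounted at CGROUP_ROOT on the host,
	// not just inside this container.
	cmd := exec.Command("nsenter",
		"--cgroup=/hostproc/1/ns/cgroup",
		"--mount=/hostproc/1/ns/mnt",
		binPath+"/cilium-mount", cgroupRoot)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("nsenter cilium-mount: %v", err)
	}

	if err := os.Remove(helperDst); err != nil {
		log.Fatalf("remove helper copy: %v", err)
	}
}
```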
Oct 2 18:52:13.317922 kubelet[2198]: I1002 18:52:13.317869 2198 scope.go:115] "RemoveContainer" containerID="fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab" Oct 2 18:52:13.318470 kubelet[2198]: I1002 18:52:13.318437 2198 scope.go:115] "RemoveContainer" containerID="fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab" Oct 2 18:52:13.320853 env[1730]: time="2023-10-02T18:52:13.320786822Z" level=info msg="RemoveContainer for \"fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab\"" Oct 2 18:52:13.324132 env[1730]: time="2023-10-02T18:52:13.324055775Z" level=info msg="RemoveContainer for \"fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab\"" Oct 2 18:52:13.324648 env[1730]: time="2023-10-02T18:52:13.324581245Z" level=error msg="RemoveContainer for \"fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab\" failed" error="failed to set removing state for container \"fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab\": container is already in removing state" Oct 2 18:52:13.325074 kubelet[2198]: E1002 18:52:13.325040 2198 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab\": container is already in removing state" containerID="fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab" Oct 2 18:52:13.325228 kubelet[2198]: E1002 18:52:13.325099 2198 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab": container is already in removing state; Skipping pod "cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c)" Oct 2 18:52:13.325622 kubelet[2198]: E1002 18:52:13.325576 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c)\"" pod="kube-system/cilium-54x2v" podUID=8841f515-58a8-4e3a-8730-62a6b2acec2c Oct 2 18:52:13.327303 env[1730]: time="2023-10-02T18:52:13.327232119Z" level=info msg="RemoveContainer for \"fa09f3047aa787ae587dc903f9995668c36092b0320063b5941be2a263ef8dab\" returns successfully" Oct 2 18:52:13.740584 kubelet[2198]: E1002 18:52:13.740509 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:14.741008 kubelet[2198]: E1002 18:52:14.740924 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:15.460515 kubelet[2198]: W1002 18:52:15.460463 2198 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8841f515_58a8_4e3a_8730_62a6b2acec2c.slice/cri-containerd-227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c.scope WatchSource:0}: task 227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c not found: not found Oct 2 18:52:15.741745 kubelet[2198]: E1002 18:52:15.741546 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:16.742838 kubelet[2198]: E1002 18:52:16.742755 2198 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:17.743303 kubelet[2198]: E1002 18:52:17.743236 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:17.901625 kubelet[2198]: E1002 18:52:17.901583 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:52:18.744329 kubelet[2198]: E1002 18:52:18.744279 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:19.746150 kubelet[2198]: E1002 18:52:19.746051 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:20.746881 kubelet[2198]: E1002 18:52:20.746834 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:21.748455 kubelet[2198]: E1002 18:52:21.748388 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:22.748654 kubelet[2198]: E1002 18:52:22.748579 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:22.903125 kubelet[2198]: E1002 18:52:22.903070 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:52:23.749090 kubelet[2198]: E1002 18:52:23.749014 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:24.749451 kubelet[2198]: E1002 18:52:24.749385 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:25.749814 kubelet[2198]: E1002 18:52:25.749769 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:26.196531 kubelet[2198]: E1002 18:52:26.196476 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c)\"" pod="kube-system/cilium-54x2v" podUID=8841f515-58a8-4e3a-8730-62a6b2acec2c Oct 2 18:52:26.750895 kubelet[2198]: E1002 18:52:26.750847 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:27.704743 kubelet[2198]: E1002 18:52:27.704670 2198 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:27.751759 kubelet[2198]: E1002 18:52:27.751724 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:27.903968 kubelet[2198]: E1002 18:52:27.903931 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:52:28.753365 kubelet[2198]: E1002 18:52:28.753290 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:29.753883 
kubelet[2198]: E1002 18:52:29.753779 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:30.754764 kubelet[2198]: E1002 18:52:30.754718 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:31.756060 kubelet[2198]: E1002 18:52:31.755999 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:32.756956 kubelet[2198]: E1002 18:52:32.756881 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:32.906020 kubelet[2198]: E1002 18:52:32.905986 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:52:33.757553 kubelet[2198]: E1002 18:52:33.757490 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:34.758025 kubelet[2198]: E1002 18:52:34.757973 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:35.759869 kubelet[2198]: E1002 18:52:35.759790 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:36.761161 kubelet[2198]: E1002 18:52:36.761087 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:37.762394 kubelet[2198]: E1002 18:52:37.762272 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:37.908226 kubelet[2198]: E1002 18:52:37.908163 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:52:38.763816 kubelet[2198]: E1002 18:52:38.763768 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:39.764882 kubelet[2198]: E1002 18:52:39.764831 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:40.766265 kubelet[2198]: E1002 18:52:40.766184 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:41.199672 env[1730]: time="2023-10-02T18:52:41.199620361Z" level=info msg="CreateContainer within sandbox \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 18:52:41.219328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3122218972.mount: Deactivated successfully. Oct 2 18:52:41.226612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3116968473.mount: Deactivated successfully. 
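The RemoveContainer exchanges above are a benign race rather than a new failure: the kubelet asks for the dead init container to be removed twice in quick succession, containerd rejects the second request with "container is already in removing state", the kubelet logs it and skips, and the first request later "returns successfully". A minimal sketch of that kind of single-owner guard (not containerd's actual store):

```go
// Sketch of the "already in removing state" race: the second remover is
// rejected and treats the rejection as non-fatal.
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errAlreadyRemoving = errors.New("container is already in removing state")

type store struct {
	mu       sync.Mutex
	removing map[string]bool
}

func (s *store) Remove(id string) error {
	s.mu.Lock()
	if s.removing[id] {
		s.mu.Unlock()
		return errAlreadyRemoving // second caller loses the race
	}
	s.removing[id] = true
	s.mu.Unlock()

	// ... actual cleanup (task delete, snapshot removal) would happen here ...
	return nil
}

func main() {
	s := &store{removing: map[string]bool{}}
	id := "bfd126d4beb5" // truncated for readability

	for i := 0; i < 2; i++ {
		if err := s.Remove(id); errors.Is(err, errAlreadyRemoving) {
			fmt.Println("remove failed:", err, "- skipping, another remover owns it")
		} else if err == nil {
			fmt.Println("remove started for", id)
		}
	}
}
```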
Oct 2 18:52:41.232693 env[1730]: time="2023-10-02T18:52:41.232610212Z" level=info msg="CreateContainer within sandbox \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f\"" Oct 2 18:52:41.234013 env[1730]: time="2023-10-02T18:52:41.233949424Z" level=info msg="StartContainer for \"2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f\"" Oct 2 18:52:41.281107 systemd[1]: Started cri-containerd-2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f.scope. Oct 2 18:52:41.318630 systemd[1]: cri-containerd-2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f.scope: Deactivated successfully. Oct 2 18:52:41.339732 env[1730]: time="2023-10-02T18:52:41.339615796Z" level=info msg="shim disconnected" id=2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f Oct 2 18:52:41.339732 env[1730]: time="2023-10-02T18:52:41.339720715Z" level=warning msg="cleaning up after shim disconnected" id=2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f namespace=k8s.io Oct 2 18:52:41.340093 env[1730]: time="2023-10-02T18:52:41.339743311Z" level=info msg="cleaning up dead shim" Oct 2 18:52:41.366179 env[1730]: time="2023-10-02T18:52:41.366088067Z" level=warning msg="cleanup warnings time=\"2023-10-02T18:52:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2848 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T18:52:41Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 18:52:41.366690 env[1730]: time="2023-10-02T18:52:41.366584029Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 18:52:41.367050 env[1730]: time="2023-10-02T18:52:41.366991644Z" level=error msg="Failed to pipe stdout of container \"2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f\"" error="reading from a closed fifo" Oct 2 18:52:41.367255 env[1730]: time="2023-10-02T18:52:41.367163596Z" level=error msg="Failed to pipe stderr of container \"2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f\"" error="reading from a closed fifo" Oct 2 18:52:41.372629 env[1730]: time="2023-10-02T18:52:41.372544578Z" level=error msg="StartContainer for \"2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 18:52:41.372927 kubelet[2198]: E1002 18:52:41.372864 2198 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f" Oct 2 18:52:41.373100 kubelet[2198]: E1002 18:52:41.373018 2198 kuberuntime_manager.go:862] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 18:52:41.373100 kubelet[2198]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 18:52:41.373100 kubelet[2198]: rm /hostbin/cilium-mount Oct 2 18:52:41.373100 kubelet[2198]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lgzrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 18:52:41.373513 kubelet[2198]: E1002 18:52:41.373079 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-54x2v" podUID=8841f515-58a8-4e3a-8730-62a6b2acec2c Oct 2 18:52:41.383405 kubelet[2198]: I1002 18:52:41.382639 2198 scope.go:115] "RemoveContainer" containerID="227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c" Oct 2 18:52:41.383646 kubelet[2198]: I1002 18:52:41.383373 2198 scope.go:115] "RemoveContainer" containerID="227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c" Oct 2 18:52:41.385497 env[1730]: time="2023-10-02T18:52:41.385435242Z" level=info msg="RemoveContainer for \"227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c\"" Oct 2 18:52:41.386386 env[1730]: time="2023-10-02T18:52:41.386341410Z" level=info msg="RemoveContainer for \"227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c\"" Oct 2 18:52:41.386746 env[1730]: time="2023-10-02T18:52:41.386678080Z" level=error msg="RemoveContainer for \"227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c\" failed" error="failed to set removing state for container 
\"227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c\": container is already in removing state" Oct 2 18:52:41.387720 kubelet[2198]: E1002 18:52:41.387389 2198 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c\": container is already in removing state" containerID="227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c" Oct 2 18:52:41.387720 kubelet[2198]: E1002 18:52:41.387464 2198 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c": container is already in removing state; Skipping pod "cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c)" Oct 2 18:52:41.388350 kubelet[2198]: E1002 18:52:41.388320 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c)\"" pod="kube-system/cilium-54x2v" podUID=8841f515-58a8-4e3a-8730-62a6b2acec2c Oct 2 18:52:41.391854 env[1730]: time="2023-10-02T18:52:41.391788514Z" level=info msg="RemoveContainer for \"227d787661fc6e7c19cd5c48546366b17418d954b23c34b252ad4417691ca65c\" returns successfully" Oct 2 18:52:41.767212 kubelet[2198]: E1002 18:52:41.767129 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:42.211482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f-rootfs.mount: Deactivated successfully. 
Oct 2 18:52:42.767910 kubelet[2198]: E1002 18:52:42.767866 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:42.909573 kubelet[2198]: E1002 18:52:42.909505 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:52:43.769394 kubelet[2198]: E1002 18:52:43.769331 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:44.444740 kubelet[2198]: W1002 18:52:44.444664 2198 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8841f515_58a8_4e3a_8730_62a6b2acec2c.slice/cri-containerd-2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f.scope WatchSource:0}: task 2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f not found: not found Oct 2 18:52:44.769828 kubelet[2198]: E1002 18:52:44.769703 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:45.771183 kubelet[2198]: E1002 18:52:45.771110 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:46.771992 kubelet[2198]: E1002 18:52:46.771941 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:47.705077 kubelet[2198]: E1002 18:52:47.705035 2198 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:47.773580 kubelet[2198]: E1002 18:52:47.773542 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:47.910469 kubelet[2198]: E1002 18:52:47.910425 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:52:48.774576 kubelet[2198]: E1002 18:52:48.774503 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:49.775683 kubelet[2198]: E1002 18:52:49.775607 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:50.776025 kubelet[2198]: E1002 18:52:50.775977 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:51.776859 kubelet[2198]: E1002 18:52:51.776788 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:52.777340 kubelet[2198]: E1002 18:52:52.777291 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:52.912251 kubelet[2198]: E1002 18:52:52.912218 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:52:53.196712 kubelet[2198]: E1002 18:52:53.196675 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed 
container=mount-cgroup pod=cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c)\"" pod="kube-system/cilium-54x2v" podUID=8841f515-58a8-4e3a-8730-62a6b2acec2c Oct 2 18:52:53.778505 kubelet[2198]: E1002 18:52:53.778462 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:54.779848 kubelet[2198]: E1002 18:52:54.779804 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:55.781464 kubelet[2198]: E1002 18:52:55.781399 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:56.782604 kubelet[2198]: E1002 18:52:56.782560 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:57.783480 kubelet[2198]: E1002 18:52:57.783417 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:57.913936 kubelet[2198]: E1002 18:52:57.913881 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:52:58.783942 kubelet[2198]: E1002 18:52:58.783869 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:52:59.784213 kubelet[2198]: E1002 18:52:59.784144 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:00.785619 kubelet[2198]: E1002 18:53:00.785547 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:01.786351 kubelet[2198]: E1002 18:53:01.786279 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:02.786640 kubelet[2198]: E1002 18:53:02.786589 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:02.914885 kubelet[2198]: E1002 18:53:02.914855 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:53:03.787476 kubelet[2198]: E1002 18:53:03.787433 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:04.789150 kubelet[2198]: E1002 18:53:04.789106 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:05.196804 kubelet[2198]: E1002 18:53:05.196413 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c)\"" pod="kube-system/cilium-54x2v" podUID=8841f515-58a8-4e3a-8730-62a6b2acec2c Oct 2 18:53:05.790420 kubelet[2198]: E1002 18:53:05.790356 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:06.790539 kubelet[2198]: E1002 18:53:06.790473 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 18:53:07.704819 kubelet[2198]: E1002 18:53:07.704757 2198 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:07.791295 kubelet[2198]: E1002 18:53:07.791228 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:07.916519 kubelet[2198]: E1002 18:53:07.916486 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:53:08.792036 kubelet[2198]: E1002 18:53:08.791991 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:09.793411 kubelet[2198]: E1002 18:53:09.793344 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:10.794169 kubelet[2198]: E1002 18:53:10.794105 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:11.794763 kubelet[2198]: E1002 18:53:11.794698 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:12.795028 kubelet[2198]: E1002 18:53:12.794964 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:12.918610 kubelet[2198]: E1002 18:53:12.918558 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:53:13.795485 kubelet[2198]: E1002 18:53:13.795442 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:14.796987 kubelet[2198]: E1002 18:53:14.796919 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:15.797420 kubelet[2198]: E1002 18:53:15.797342 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:16.797853 kubelet[2198]: E1002 18:53:16.797809 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:17.798982 kubelet[2198]: E1002 18:53:17.798939 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:17.919211 kubelet[2198]: E1002 18:53:17.919146 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:53:18.197722 kubelet[2198]: E1002 18:53:18.197288 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c)\"" pod="kube-system/cilium-54x2v" podUID=8841f515-58a8-4e3a-8730-62a6b2acec2c Oct 2 18:53:18.800369 kubelet[2198]: E1002 18:53:18.800324 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:19.801366 kubelet[2198]: E1002 
18:53:19.801289 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:20.801846 kubelet[2198]: E1002 18:53:20.801782 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:21.802848 kubelet[2198]: E1002 18:53:21.802787 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:22.803722 kubelet[2198]: E1002 18:53:22.803652 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:22.920262 kubelet[2198]: E1002 18:53:22.920219 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:53:23.804270 kubelet[2198]: E1002 18:53:23.804222 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:24.805375 kubelet[2198]: E1002 18:53:24.805312 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:25.805944 kubelet[2198]: E1002 18:53:25.805881 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:26.806359 kubelet[2198]: E1002 18:53:26.806308 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:27.705061 kubelet[2198]: E1002 18:53:27.704999 2198 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:27.807062 kubelet[2198]: E1002 18:53:27.806998 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:27.921710 kubelet[2198]: E1002 18:53:27.921657 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:53:28.808179 kubelet[2198]: E1002 18:53:28.808117 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:29.808465 kubelet[2198]: E1002 18:53:29.808403 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:30.809040 kubelet[2198]: E1002 18:53:30.809003 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:31.810312 kubelet[2198]: E1002 18:53:31.810245 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:32.200440 env[1730]: time="2023-10-02T18:53:32.199933973Z" level=info msg="CreateContainer within sandbox \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 18:53:32.216711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2389408929.mount: Deactivated successfully. Oct 2 18:53:32.227016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1841303221.mount: Deactivated successfully. 
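The transient mount units systemd keeps reporting ("var-lib-containerd-tmpmounts-containerd\x2dmountNNNN.mount") are containerd's temporary mounts under /var/lib/containerd/tmpmounts, with the path converted to a unit name: "/" becomes "-" and a literal "-" is escaped as "\x2d" so it cannot be mistaken for a separator. A rough sketch of that encoding; the exact character set systemd preserves is defined in systemd.unit(5), and this approximation escapes slightly more than systemd does:

```go
// Approximate systemd path-to-unit-name escaping, enough to reproduce the
// mount unit names seen in this log.
package main

import "fmt"

func escapePath(p string) string {
	// Drop the leading "/" the way systemd does for absolute paths.
	if len(p) > 0 && p[0] == '/' {
		p = p[1:]
	}
	out := make([]byte, 0, len(p))
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			out = append(out, '-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9', c == '_':
			out = append(out, c)
		default: // in this simplified version: '-', '.', and everything else
			out = append(out, []byte(fmt.Sprintf(`\x%02x`, c))...)
		}
	}
	return string(out)
}

func main() {
	p := "/var/lib/containerd/tmpmounts/containerd-mount2389408929"
	fmt.Println(escapePath(p) + ".mount")
	// prints: var-lib-containerd-tmpmounts-containerd\x2dmount2389408929.mount
}
```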
Oct 2 18:53:32.234029 env[1730]: time="2023-10-02T18:53:32.233944059Z" level=info msg="CreateContainer within sandbox \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a\"" Oct 2 18:53:32.235385 env[1730]: time="2023-10-02T18:53:32.235324481Z" level=info msg="StartContainer for \"959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a\"" Oct 2 18:53:32.281450 systemd[1]: Started cri-containerd-959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a.scope. Oct 2 18:53:32.317774 systemd[1]: cri-containerd-959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a.scope: Deactivated successfully. Oct 2 18:53:32.342271 env[1730]: time="2023-10-02T18:53:32.342152555Z" level=info msg="shim disconnected" id=959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a Oct 2 18:53:32.342600 env[1730]: time="2023-10-02T18:53:32.342566259Z" level=warning msg="cleaning up after shim disconnected" id=959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a namespace=k8s.io Oct 2 18:53:32.342723 env[1730]: time="2023-10-02T18:53:32.342695237Z" level=info msg="cleaning up dead shim" Oct 2 18:53:32.370678 env[1730]: time="2023-10-02T18:53:32.370614948Z" level=warning msg="cleanup warnings time=\"2023-10-02T18:53:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2891 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T18:53:32Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 18:53:32.371371 env[1730]: time="2023-10-02T18:53:32.371295187Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 18:53:32.374342 env[1730]: time="2023-10-02T18:53:32.374282918Z" level=error msg="Failed to pipe stderr of container \"959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a\"" error="reading from a closed fifo" Oct 2 18:53:32.374508 env[1730]: time="2023-10-02T18:53:32.374305970Z" level=error msg="Failed to pipe stdout of container \"959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a\"" error="reading from a closed fifo" Oct 2 18:53:32.377051 env[1730]: time="2023-10-02T18:53:32.376941173Z" level=error msg="StartContainer for \"959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 18:53:32.378027 kubelet[2198]: E1002 18:53:32.377390 2198 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a" Oct 2 18:53:32.378027 kubelet[2198]: E1002 18:53:32.377515 2198 kuberuntime_manager.go:862] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 18:53:32.378027 kubelet[2198]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 18:53:32.378027 kubelet[2198]: rm /hostbin/cilium-mount Oct 2 18:53:32.378400 kubelet[2198]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lgzrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 18:53:32.378515 kubelet[2198]: E1002 18:53:32.377572 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-54x2v" podUID=8841f515-58a8-4e3a-8730-62a6b2acec2c Oct 2 18:53:32.487955 kubelet[2198]: I1002 18:53:32.487830 2198 scope.go:115] "RemoveContainer" containerID="2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f" Oct 2 18:53:32.489941 kubelet[2198]: I1002 18:53:32.489725 2198 scope.go:115] "RemoveContainer" containerID="2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f" Oct 2 18:53:32.492713 env[1730]: time="2023-10-02T18:53:32.492428799Z" level=info msg="RemoveContainer for \"2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f\"" Oct 2 18:53:32.494892 env[1730]: time="2023-10-02T18:53:32.494816319Z" level=info msg="RemoveContainer for \"2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f\"" Oct 2 18:53:32.495427 env[1730]: time="2023-10-02T18:53:32.495266696Z" level=error msg="RemoveContainer for \"2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f\" failed" error="failed to set removing state for container 
\"2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f\": container is already in removing state" Oct 2 18:53:32.496226 kubelet[2198]: E1002 18:53:32.495838 2198 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f\": container is already in removing state" containerID="2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f" Oct 2 18:53:32.496226 kubelet[2198]: E1002 18:53:32.495924 2198 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f": container is already in removing state; Skipping pod "cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c)" Oct 2 18:53:32.497065 kubelet[2198]: E1002 18:53:32.496676 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c)\"" pod="kube-system/cilium-54x2v" podUID=8841f515-58a8-4e3a-8730-62a6b2acec2c Oct 2 18:53:32.499814 env[1730]: time="2023-10-02T18:53:32.499717433Z" level=info msg="RemoveContainer for \"2d50f96ef688cad388b3a3a8f7b6de2284a90ccfc81c2b5f31cb55787039536f\" returns successfully" Oct 2 18:53:32.810724 kubelet[2198]: E1002 18:53:32.810665 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:32.923595 kubelet[2198]: E1002 18:53:32.923495 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:53:33.212066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a-rootfs.mount: Deactivated successfully. 
Oct 2 18:53:33.811339 kubelet[2198]: E1002 18:53:33.811274 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:34.811898 kubelet[2198]: E1002 18:53:34.811845 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:35.448793 kubelet[2198]: W1002 18:53:35.448740 2198 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8841f515_58a8_4e3a_8730_62a6b2acec2c.slice/cri-containerd-959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a.scope WatchSource:0}: task 959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a not found: not found Oct 2 18:53:35.812435 kubelet[2198]: E1002 18:53:35.812380 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:36.812964 kubelet[2198]: E1002 18:53:36.812898 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:37.813516 kubelet[2198]: E1002 18:53:37.813453 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:37.924213 kubelet[2198]: E1002 18:53:37.924164 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:53:38.814476 kubelet[2198]: E1002 18:53:38.814414 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:39.814799 kubelet[2198]: E1002 18:53:39.814713 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:40.815109 kubelet[2198]: E1002 18:53:40.815046 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:41.816123 kubelet[2198]: E1002 18:53:41.816067 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:42.816979 kubelet[2198]: E1002 18:53:42.816923 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:42.925533 kubelet[2198]: E1002 18:53:42.925487 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:53:43.817875 kubelet[2198]: E1002 18:53:43.817817 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:44.197067 kubelet[2198]: E1002 18:53:44.196780 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c)\"" pod="kube-system/cilium-54x2v" podUID=8841f515-58a8-4e3a-8730-62a6b2acec2c Oct 2 18:53:44.818935 kubelet[2198]: E1002 18:53:44.818872 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:45.819324 kubelet[2198]: E1002 18:53:45.819260 2198 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:46.820046 kubelet[2198]: E1002 18:53:46.819968 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:47.705117 kubelet[2198]: E1002 18:53:47.705061 2198 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:47.820871 kubelet[2198]: E1002 18:53:47.820812 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:47.926698 kubelet[2198]: E1002 18:53:47.926647 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:53:48.821889 kubelet[2198]: E1002 18:53:48.821821 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:49.822880 kubelet[2198]: E1002 18:53:49.822843 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:50.824165 kubelet[2198]: E1002 18:53:50.824108 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:51.824306 kubelet[2198]: E1002 18:53:51.824249 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:52.824437 kubelet[2198]: E1002 18:53:52.824373 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:52.927882 kubelet[2198]: E1002 18:53:52.927838 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:53:53.825128 kubelet[2198]: E1002 18:53:53.825075 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:54.825485 kubelet[2198]: E1002 18:53:54.825429 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:55.825784 kubelet[2198]: E1002 18:53:55.825742 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:56.827368 kubelet[2198]: E1002 18:53:56.827332 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:57.828392 kubelet[2198]: E1002 18:53:57.828359 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:57.929144 kubelet[2198]: E1002 18:53:57.929095 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:53:58.829704 kubelet[2198]: E1002 18:53:58.829638 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:53:59.197149 kubelet[2198]: E1002 18:53:59.196773 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with 
CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c)\"" pod="kube-system/cilium-54x2v" podUID=8841f515-58a8-4e3a-8730-62a6b2acec2c Oct 2 18:53:59.830446 kubelet[2198]: E1002 18:53:59.830384 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:00.831128 kubelet[2198]: E1002 18:54:00.831067 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:01.832058 kubelet[2198]: E1002 18:54:01.832010 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:02.833594 kubelet[2198]: E1002 18:54:02.833530 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:02.930827 kubelet[2198]: E1002 18:54:02.930782 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:54:03.834651 kubelet[2198]: E1002 18:54:03.834596 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:04.835311 kubelet[2198]: E1002 18:54:04.835250 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:05.835939 kubelet[2198]: E1002 18:54:05.835875 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:06.836680 kubelet[2198]: E1002 18:54:06.836643 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:07.705269 kubelet[2198]: E1002 18:54:07.705151 2198 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:07.838095 kubelet[2198]: E1002 18:54:07.838063 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:07.931886 kubelet[2198]: E1002 18:54:07.931842 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:54:08.839028 kubelet[2198]: E1002 18:54:08.838971 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:09.839562 kubelet[2198]: E1002 18:54:09.839527 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:10.196895 kubelet[2198]: E1002 18:54:10.196296 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c)\"" pod="kube-system/cilium-54x2v" podUID=8841f515-58a8-4e3a-8730-62a6b2acec2c Oct 2 18:54:10.840474 kubelet[2198]: E1002 18:54:10.840439 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:11.841347 kubelet[2198]: E1002 18:54:11.841311 2198 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:12.842782 kubelet[2198]: E1002 18:54:12.842716 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:12.933005 kubelet[2198]: E1002 18:54:12.932975 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:54:13.843330 kubelet[2198]: E1002 18:54:13.843265 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:14.844138 kubelet[2198]: E1002 18:54:14.844071 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:15.844731 kubelet[2198]: E1002 18:54:15.844691 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:16.846219 kubelet[2198]: E1002 18:54:16.846151 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:17.846898 kubelet[2198]: E1002 18:54:17.846838 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:17.934857 kubelet[2198]: E1002 18:54:17.934804 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:54:18.847882 kubelet[2198]: E1002 18:54:18.847826 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:19.848948 kubelet[2198]: E1002 18:54:19.848886 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:20.849660 kubelet[2198]: E1002 18:54:20.849605 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:21.849920 kubelet[2198]: E1002 18:54:21.849883 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:22.851617 kubelet[2198]: E1002 18:54:22.851557 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:22.935721 kubelet[2198]: E1002 18:54:22.935684 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:54:23.851951 kubelet[2198]: E1002 18:54:23.851888 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:24.852585 kubelet[2198]: E1002 18:54:24.852524 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:25.196830 kubelet[2198]: E1002 18:54:25.196448 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c)\"" pod="kube-system/cilium-54x2v" podUID=8841f515-58a8-4e3a-8730-62a6b2acec2c Oct 
2 18:54:25.852685 kubelet[2198]: E1002 18:54:25.852647 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:26.854074 kubelet[2198]: E1002 18:54:26.854038 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:27.704613 kubelet[2198]: E1002 18:54:27.704554 2198 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:27.855247 kubelet[2198]: E1002 18:54:27.855209 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:27.937263 kubelet[2198]: E1002 18:54:27.937208 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:54:28.856207 kubelet[2198]: E1002 18:54:28.856150 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:29.857331 kubelet[2198]: E1002 18:54:29.857295 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:30.858405 kubelet[2198]: E1002 18:54:30.858367 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:31.859185 kubelet[2198]: E1002 18:54:31.859130 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:32.860084 kubelet[2198]: E1002 18:54:32.860028 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:32.938347 kubelet[2198]: E1002 18:54:32.938298 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:54:33.861221 kubelet[2198]: E1002 18:54:33.861166 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:34.862852 kubelet[2198]: E1002 18:54:34.862814 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:35.863612 kubelet[2198]: E1002 18:54:35.863573 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:36.865423 kubelet[2198]: E1002 18:54:36.865356 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:37.865825 kubelet[2198]: E1002 18:54:37.865790 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:37.938906 kubelet[2198]: E1002 18:54:37.938865 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:54:38.196655 kubelet[2198]: E1002 18:54:38.196180 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup 
pod=cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c)\"" pod="kube-system/cilium-54x2v" podUID=8841f515-58a8-4e3a-8730-62a6b2acec2c Oct 2 18:54:38.867042 kubelet[2198]: E1002 18:54:38.867005 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:39.867930 kubelet[2198]: E1002 18:54:39.867883 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:40.869048 kubelet[2198]: E1002 18:54:40.868987 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:41.870217 kubelet[2198]: E1002 18:54:41.870134 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:42.870455 kubelet[2198]: E1002 18:54:42.870391 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:42.940485 kubelet[2198]: E1002 18:54:42.940440 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:54:43.871100 kubelet[2198]: E1002 18:54:43.871027 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:44.872332 kubelet[2198]: E1002 18:54:44.872293 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:45.873508 kubelet[2198]: E1002 18:54:45.873472 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:46.874720 kubelet[2198]: E1002 18:54:46.874678 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:47.704796 kubelet[2198]: E1002 18:54:47.704742 2198 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:47.875587 kubelet[2198]: E1002 18:54:47.875531 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:47.941235 kubelet[2198]: E1002 18:54:47.941179 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:54:48.875875 kubelet[2198]: E1002 18:54:48.875839 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:49.877086 kubelet[2198]: E1002 18:54:49.877026 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:50.877970 kubelet[2198]: E1002 18:54:50.877909 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:51.878775 kubelet[2198]: E1002 18:54:51.878708 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:52.879751 kubelet[2198]: E1002 18:54:52.879709 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:52.942687 kubelet[2198]: 
E1002 18:54:52.942650 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:54:53.200086 env[1730]: time="2023-10-02T18:54:53.199561083Z" level=info msg="CreateContainer within sandbox \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}" Oct 2 18:54:53.222400 env[1730]: time="2023-10-02T18:54:53.222318637Z" level=info msg="CreateContainer within sandbox \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id \"a2f72ff1dc755f74be8e811c57fe547aa84a78818cbdcc36d83fcb2019ee13d3\"" Oct 2 18:54:53.223296 env[1730]: time="2023-10-02T18:54:53.223153438Z" level=info msg="StartContainer for \"a2f72ff1dc755f74be8e811c57fe547aa84a78818cbdcc36d83fcb2019ee13d3\"" Oct 2 18:54:53.270339 systemd[1]: Started cri-containerd-a2f72ff1dc755f74be8e811c57fe547aa84a78818cbdcc36d83fcb2019ee13d3.scope. Oct 2 18:54:53.284390 systemd[1]: run-containerd-runc-k8s.io-a2f72ff1dc755f74be8e811c57fe547aa84a78818cbdcc36d83fcb2019ee13d3-runc.rdmXcA.mount: Deactivated successfully. Oct 2 18:54:53.314976 systemd[1]: cri-containerd-a2f72ff1dc755f74be8e811c57fe547aa84a78818cbdcc36d83fcb2019ee13d3.scope: Deactivated successfully. Oct 2 18:54:53.332481 env[1730]: time="2023-10-02T18:54:53.332396147Z" level=info msg="shim disconnected" id=a2f72ff1dc755f74be8e811c57fe547aa84a78818cbdcc36d83fcb2019ee13d3 Oct 2 18:54:53.332481 env[1730]: time="2023-10-02T18:54:53.332474135Z" level=warning msg="cleaning up after shim disconnected" id=a2f72ff1dc755f74be8e811c57fe547aa84a78818cbdcc36d83fcb2019ee13d3 namespace=k8s.io Oct 2 18:54:53.332843 env[1730]: time="2023-10-02T18:54:53.332498855Z" level=info msg="cleaning up dead shim" Oct 2 18:54:53.357399 env[1730]: time="2023-10-02T18:54:53.357326115Z" level=warning msg="cleanup warnings time=\"2023-10-02T18:54:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2938 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T18:54:53Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a2f72ff1dc755f74be8e811c57fe547aa84a78818cbdcc36d83fcb2019ee13d3/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 18:54:53.357848 env[1730]: time="2023-10-02T18:54:53.357766094Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 18:54:53.358339 env[1730]: time="2023-10-02T18:54:53.358283809Z" level=error msg="Failed to pipe stdout of container \"a2f72ff1dc755f74be8e811c57fe547aa84a78818cbdcc36d83fcb2019ee13d3\"" error="reading from a closed fifo" Oct 2 18:54:53.360367 env[1730]: time="2023-10-02T18:54:53.360299911Z" level=error msg="Failed to pipe stderr of container \"a2f72ff1dc755f74be8e811c57fe547aa84a78818cbdcc36d83fcb2019ee13d3\"" error="reading from a closed fifo" Oct 2 18:54:53.365835 env[1730]: time="2023-10-02T18:54:53.365771176Z" level=error msg="StartContainer for \"a2f72ff1dc755f74be8e811c57fe547aa84a78818cbdcc36d83fcb2019ee13d3\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 18:54:53.366891 kubelet[2198]: E1002 18:54:53.366276 2198 remote_runtime.go:474] 
"StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a2f72ff1dc755f74be8e811c57fe547aa84a78818cbdcc36d83fcb2019ee13d3" Oct 2 18:54:53.366891 kubelet[2198]: E1002 18:54:53.366668 2198 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 18:54:53.366891 kubelet[2198]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 18:54:53.366891 kubelet[2198]: rm /hostbin/cilium-mount Oct 2 18:54:53.367285 kubelet[2198]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lgzrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 18:54:53.367401 kubelet[2198]: E1002 18:54:53.366837 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-54x2v" podUID=8841f515-58a8-4e3a-8730-62a6b2acec2c Oct 2 18:54:53.657139 kubelet[2198]: I1002 18:54:53.656396 2198 scope.go:115] "RemoveContainer" containerID="959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a" Oct 2 18:54:53.657139 kubelet[2198]: I1002 18:54:53.656866 2198 scope.go:115] "RemoveContainer" containerID="959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a" Oct 2 18:54:53.659496 env[1730]: time="2023-10-02T18:54:53.659445916Z" level=info msg="RemoveContainer for 
\"959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a\"" Oct 2 18:54:53.660839 env[1730]: time="2023-10-02T18:54:53.660621769Z" level=info msg="RemoveContainer for \"959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a\"" Oct 2 18:54:53.663277 env[1730]: time="2023-10-02T18:54:53.660917376Z" level=error msg="RemoveContainer for \"959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a\" failed" error="failed to set removing state for container \"959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a\": container is already in removing state" Oct 2 18:54:53.663681 kubelet[2198]: E1002 18:54:53.663642 2198 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a\": container is already in removing state" containerID="959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a" Oct 2 18:54:53.663805 kubelet[2198]: E1002 18:54:53.663723 2198 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a": container is already in removing state; Skipping pod "cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c)" Oct 2 18:54:53.664546 kubelet[2198]: E1002 18:54:53.664376 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-54x2v_kube-system(8841f515-58a8-4e3a-8730-62a6b2acec2c)\"" pod="kube-system/cilium-54x2v" podUID=8841f515-58a8-4e3a-8730-62a6b2acec2c Oct 2 18:54:53.665465 env[1730]: time="2023-10-02T18:54:53.665414424Z" level=info msg="RemoveContainer for \"959e639ad39b854f7f2967ed4114223df27678e91e15b5ad0a696ec702cc1e0a\" returns successfully" Oct 2 18:54:53.881317 kubelet[2198]: E1002 18:54:53.881261 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:54.211347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2f72ff1dc755f74be8e811c57fe547aa84a78818cbdcc36d83fcb2019ee13d3-rootfs.mount: Deactivated successfully. Oct 2 18:54:54.882030 kubelet[2198]: E1002 18:54:54.881995 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:55.883523 kubelet[2198]: E1002 18:54:55.883488 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:56.178947 env[1730]: time="2023-10-02T18:54:56.176277437Z" level=info msg="StopPodSandbox for \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\"" Oct 2 18:54:56.178947 env[1730]: time="2023-10-02T18:54:56.176375621Z" level=info msg="Container to stop \"a2f72ff1dc755f74be8e811c57fe547aa84a78818cbdcc36d83fcb2019ee13d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 18:54:56.178678 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458-shm.mount: Deactivated successfully. Oct 2 18:54:56.196464 systemd[1]: cri-containerd-3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458.scope: Deactivated successfully. 
Oct 2 18:54:56.195000 audit: BPF prog-id=77 op=UNLOAD Oct 2 18:54:56.201602 kernel: kauditd_printk_skb: 165 callbacks suppressed Oct 2 18:54:56.201740 kernel: audit: type=1334 audit(1696272896.195:725): prog-id=77 op=UNLOAD Oct 2 18:54:56.202000 audit: BPF prog-id=80 op=UNLOAD Oct 2 18:54:56.208227 kernel: audit: type=1334 audit(1696272896.202:726): prog-id=80 op=UNLOAD Oct 2 18:54:56.252438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458-rootfs.mount: Deactivated successfully. Oct 2 18:54:56.272504 env[1730]: time="2023-10-02T18:54:56.272424998Z" level=info msg="shim disconnected" id=3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458 Oct 2 18:54:56.272504 env[1730]: time="2023-10-02T18:54:56.272500946Z" level=warning msg="cleaning up after shim disconnected" id=3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458 namespace=k8s.io Oct 2 18:54:56.272844 env[1730]: time="2023-10-02T18:54:56.272524166Z" level=info msg="cleaning up dead shim" Oct 2 18:54:56.298550 env[1730]: time="2023-10-02T18:54:56.298473048Z" level=warning msg="cleanup warnings time=\"2023-10-02T18:54:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2968 runtime=io.containerd.runc.v2\n" Oct 2 18:54:56.299056 env[1730]: time="2023-10-02T18:54:56.299008991Z" level=info msg="TearDown network for sandbox \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\" successfully" Oct 2 18:54:56.299146 env[1730]: time="2023-10-02T18:54:56.299056174Z" level=info msg="StopPodSandbox for \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\" returns successfully" Oct 2 18:54:56.403591 kubelet[2198]: I1002 18:54:56.403546 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-cilium-run\") pod \"8841f515-58a8-4e3a-8730-62a6b2acec2c\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " Oct 2 18:54:56.403829 kubelet[2198]: I1002 18:54:56.403700 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-cni-path\") pod \"8841f515-58a8-4e3a-8730-62a6b2acec2c\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " Oct 2 18:54:56.403829 kubelet[2198]: I1002 18:54:56.403630 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8841f515-58a8-4e3a-8730-62a6b2acec2c" (UID: "8841f515-58a8-4e3a-8730-62a6b2acec2c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:54:56.403829 kubelet[2198]: I1002 18:54:56.403819 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-lib-modules\") pod \"8841f515-58a8-4e3a-8730-62a6b2acec2c\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " Oct 2 18:54:56.404026 kubelet[2198]: I1002 18:54:56.403870 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-cni-path" (OuterVolumeSpecName: "cni-path") pod "8841f515-58a8-4e3a-8730-62a6b2acec2c" (UID: "8841f515-58a8-4e3a-8730-62a6b2acec2c"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:54:56.404026 kubelet[2198]: I1002 18:54:56.403936 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8841f515-58a8-4e3a-8730-62a6b2acec2c-hubble-tls\") pod \"8841f515-58a8-4e3a-8730-62a6b2acec2c\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " Oct 2 18:54:56.404026 kubelet[2198]: I1002 18:54:56.403986 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8841f515-58a8-4e3a-8730-62a6b2acec2c" (UID: "8841f515-58a8-4e3a-8730-62a6b2acec2c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:54:56.404259 kubelet[2198]: I1002 18:54:56.404049 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-host-proc-sys-kernel\") pod \"8841f515-58a8-4e3a-8730-62a6b2acec2c\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " Oct 2 18:54:56.404780 kubelet[2198]: I1002 18:54:56.404590 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-etc-cni-netd\") pod \"8841f515-58a8-4e3a-8730-62a6b2acec2c\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " Oct 2 18:54:56.404780 kubelet[2198]: I1002 18:54:56.404656 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8841f515-58a8-4e3a-8730-62a6b2acec2c-cilium-config-path\") pod \"8841f515-58a8-4e3a-8730-62a6b2acec2c\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " Oct 2 18:54:56.404780 kubelet[2198]: I1002 18:54:56.404701 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-cilium-cgroup\") pod \"8841f515-58a8-4e3a-8730-62a6b2acec2c\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " Oct 2 18:54:56.405016 kubelet[2198]: I1002 18:54:56.404797 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgzrc\" (UniqueName: \"kubernetes.io/projected/8841f515-58a8-4e3a-8730-62a6b2acec2c-kube-api-access-lgzrc\") pod \"8841f515-58a8-4e3a-8730-62a6b2acec2c\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " Oct 2 18:54:56.405016 kubelet[2198]: I1002 18:54:56.404840 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-host-proc-sys-net\") pod \"8841f515-58a8-4e3a-8730-62a6b2acec2c\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " Oct 2 18:54:56.405016 kubelet[2198]: I1002 18:54:56.404878 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-bpf-maps\") pod \"8841f515-58a8-4e3a-8730-62a6b2acec2c\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " Oct 2 18:54:56.405016 kubelet[2198]: I1002 18:54:56.404927 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-hostproc\") pod \"8841f515-58a8-4e3a-8730-62a6b2acec2c\" (UID: 
\"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " Oct 2 18:54:56.405016 kubelet[2198]: I1002 18:54:56.404972 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8841f515-58a8-4e3a-8730-62a6b2acec2c-clustermesh-secrets\") pod \"8841f515-58a8-4e3a-8730-62a6b2acec2c\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " Oct 2 18:54:56.405016 kubelet[2198]: I1002 18:54:56.405011 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-xtables-lock\") pod \"8841f515-58a8-4e3a-8730-62a6b2acec2c\" (UID: \"8841f515-58a8-4e3a-8730-62a6b2acec2c\") " Oct 2 18:54:56.405410 kubelet[2198]: I1002 18:54:56.405051 2198 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-cilium-run\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:56.405410 kubelet[2198]: I1002 18:54:56.405077 2198 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-cni-path\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:56.405410 kubelet[2198]: I1002 18:54:56.405100 2198 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-lib-modules\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:56.405410 kubelet[2198]: I1002 18:54:56.405136 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8841f515-58a8-4e3a-8730-62a6b2acec2c" (UID: "8841f515-58a8-4e3a-8730-62a6b2acec2c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:54:56.405410 kubelet[2198]: I1002 18:54:56.405180 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8841f515-58a8-4e3a-8730-62a6b2acec2c" (UID: "8841f515-58a8-4e3a-8730-62a6b2acec2c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:54:56.405410 kubelet[2198]: I1002 18:54:56.405244 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8841f515-58a8-4e3a-8730-62a6b2acec2c" (UID: "8841f515-58a8-4e3a-8730-62a6b2acec2c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:54:56.405763 kubelet[2198]: W1002 18:54:56.405478 2198 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/8841f515-58a8-4e3a-8730-62a6b2acec2c/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 18:54:56.406905 kubelet[2198]: I1002 18:54:56.406038 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8841f515-58a8-4e3a-8730-62a6b2acec2c" (UID: "8841f515-58a8-4e3a-8730-62a6b2acec2c"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:54:56.406905 kubelet[2198]: I1002 18:54:56.406108 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8841f515-58a8-4e3a-8730-62a6b2acec2c" (UID: "8841f515-58a8-4e3a-8730-62a6b2acec2c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:54:56.409268 kubelet[2198]: I1002 18:54:56.409225 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-hostproc" (OuterVolumeSpecName: "hostproc") pod "8841f515-58a8-4e3a-8730-62a6b2acec2c" (UID: "8841f515-58a8-4e3a-8730-62a6b2acec2c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:54:56.409500 kubelet[2198]: I1002 18:54:56.409461 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8841f515-58a8-4e3a-8730-62a6b2acec2c" (UID: "8841f515-58a8-4e3a-8730-62a6b2acec2c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:54:56.410276 kubelet[2198]: I1002 18:54:56.410225 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8841f515-58a8-4e3a-8730-62a6b2acec2c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8841f515-58a8-4e3a-8730-62a6b2acec2c" (UID: "8841f515-58a8-4e3a-8730-62a6b2acec2c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 18:54:56.421681 systemd[1]: var-lib-kubelet-pods-8841f515\x2d58a8\x2d4e3a\x2d8730\x2d62a6b2acec2c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 18:54:56.429076 kubelet[2198]: I1002 18:54:56.427082 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8841f515-58a8-4e3a-8730-62a6b2acec2c-kube-api-access-lgzrc" (OuterVolumeSpecName: "kube-api-access-lgzrc") pod "8841f515-58a8-4e3a-8730-62a6b2acec2c" (UID: "8841f515-58a8-4e3a-8730-62a6b2acec2c"). InnerVolumeSpecName "kube-api-access-lgzrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 18:54:56.429076 kubelet[2198]: I1002 18:54:56.427332 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8841f515-58a8-4e3a-8730-62a6b2acec2c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8841f515-58a8-4e3a-8730-62a6b2acec2c" (UID: "8841f515-58a8-4e3a-8730-62a6b2acec2c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 18:54:56.429076 kubelet[2198]: I1002 18:54:56.427473 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8841f515-58a8-4e3a-8730-62a6b2acec2c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8841f515-58a8-4e3a-8730-62a6b2acec2c" (UID: "8841f515-58a8-4e3a-8730-62a6b2acec2c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 18:54:56.428083 systemd[1]: var-lib-kubelet-pods-8841f515\x2d58a8\x2d4e3a\x2d8730\x2d62a6b2acec2c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Oct 2 18:54:56.430908 systemd[1]: var-lib-kubelet-pods-8841f515\x2d58a8\x2d4e3a\x2d8730\x2d62a6b2acec2c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlgzrc.mount: Deactivated successfully. Oct 2 18:54:56.438275 kubelet[2198]: W1002 18:54:56.438230 2198 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8841f515_58a8_4e3a_8730_62a6b2acec2c.slice/cri-containerd-a2f72ff1dc755f74be8e811c57fe547aa84a78818cbdcc36d83fcb2019ee13d3.scope WatchSource:0}: task a2f72ff1dc755f74be8e811c57fe547aa84a78818cbdcc36d83fcb2019ee13d3 not found: not found Oct 2 18:54:56.505541 kubelet[2198]: I1002 18:54:56.505472 2198 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-host-proc-sys-kernel\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:56.505541 kubelet[2198]: I1002 18:54:56.505539 2198 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-etc-cni-netd\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:56.505771 kubelet[2198]: I1002 18:54:56.505570 2198 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8841f515-58a8-4e3a-8730-62a6b2acec2c-cilium-config-path\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:56.505771 kubelet[2198]: I1002 18:54:56.505594 2198 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-cilium-cgroup\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:56.505771 kubelet[2198]: I1002 18:54:56.505621 2198 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8841f515-58a8-4e3a-8730-62a6b2acec2c-hubble-tls\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:56.505771 kubelet[2198]: I1002 18:54:56.505645 2198 reconciler.go:399] "Volume detached for volume \"kube-api-access-lgzrc\" (UniqueName: \"kubernetes.io/projected/8841f515-58a8-4e3a-8730-62a6b2acec2c-kube-api-access-lgzrc\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:56.505771 kubelet[2198]: I1002 18:54:56.505668 2198 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-host-proc-sys-net\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:56.505771 kubelet[2198]: I1002 18:54:56.505690 2198 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-bpf-maps\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:56.505771 kubelet[2198]: I1002 18:54:56.505712 2198 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-hostproc\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:56.505771 kubelet[2198]: I1002 18:54:56.505735 2198 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8841f515-58a8-4e3a-8730-62a6b2acec2c-clustermesh-secrets\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:56.506286 kubelet[2198]: I1002 18:54:56.505758 2198 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8841f515-58a8-4e3a-8730-62a6b2acec2c-xtables-lock\") on node 
\"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:56.666629 kubelet[2198]: I1002 18:54:56.666583 2198 scope.go:115] "RemoveContainer" containerID="a2f72ff1dc755f74be8e811c57fe547aa84a78818cbdcc36d83fcb2019ee13d3" Oct 2 18:54:56.671930 env[1730]: time="2023-10-02T18:54:56.671492260Z" level=info msg="RemoveContainer for \"a2f72ff1dc755f74be8e811c57fe547aa84a78818cbdcc36d83fcb2019ee13d3\"" Oct 2 18:54:56.675698 systemd[1]: Removed slice kubepods-burstable-pod8841f515_58a8_4e3a_8730_62a6b2acec2c.slice. Oct 2 18:54:56.678021 env[1730]: time="2023-10-02T18:54:56.677965969Z" level=info msg="RemoveContainer for \"a2f72ff1dc755f74be8e811c57fe547aa84a78818cbdcc36d83fcb2019ee13d3\" returns successfully" Oct 2 18:54:56.739004 kubelet[2198]: I1002 18:54:56.738856 2198 topology_manager.go:205] "Topology Admit Handler" Oct 2 18:54:56.739004 kubelet[2198]: E1002 18:54:56.738949 2198 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="8841f515-58a8-4e3a-8730-62a6b2acec2c" containerName="mount-cgroup" Oct 2 18:54:56.739004 kubelet[2198]: E1002 18:54:56.738972 2198 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="8841f515-58a8-4e3a-8730-62a6b2acec2c" containerName="mount-cgroup" Oct 2 18:54:56.740163 kubelet[2198]: E1002 18:54:56.740127 2198 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="8841f515-58a8-4e3a-8730-62a6b2acec2c" containerName="mount-cgroup" Oct 2 18:54:56.740304 kubelet[2198]: E1002 18:54:56.740174 2198 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="8841f515-58a8-4e3a-8730-62a6b2acec2c" containerName="mount-cgroup" Oct 2 18:54:56.740304 kubelet[2198]: E1002 18:54:56.740226 2198 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="8841f515-58a8-4e3a-8730-62a6b2acec2c" containerName="mount-cgroup" Oct 2 18:54:56.740304 kubelet[2198]: I1002 18:54:56.740272 2198 memory_manager.go:345] "RemoveStaleState removing state" podUID="8841f515-58a8-4e3a-8730-62a6b2acec2c" containerName="mount-cgroup" Oct 2 18:54:56.740486 kubelet[2198]: I1002 18:54:56.740316 2198 memory_manager.go:345] "RemoveStaleState removing state" podUID="8841f515-58a8-4e3a-8730-62a6b2acec2c" containerName="mount-cgroup" Oct 2 18:54:56.740486 kubelet[2198]: I1002 18:54:56.740332 2198 memory_manager.go:345] "RemoveStaleState removing state" podUID="8841f515-58a8-4e3a-8730-62a6b2acec2c" containerName="mount-cgroup" Oct 2 18:54:56.740486 kubelet[2198]: I1002 18:54:56.740348 2198 memory_manager.go:345] "RemoveStaleState removing state" podUID="8841f515-58a8-4e3a-8730-62a6b2acec2c" containerName="mount-cgroup" Oct 2 18:54:56.740486 kubelet[2198]: I1002 18:54:56.740386 2198 memory_manager.go:345] "RemoveStaleState removing state" podUID="8841f515-58a8-4e3a-8730-62a6b2acec2c" containerName="mount-cgroup" Oct 2 18:54:56.740486 kubelet[2198]: I1002 18:54:56.740406 2198 memory_manager.go:345] "RemoveStaleState removing state" podUID="8841f515-58a8-4e3a-8730-62a6b2acec2c" containerName="mount-cgroup" Oct 2 18:54:56.740486 kubelet[2198]: E1002 18:54:56.740443 2198 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="8841f515-58a8-4e3a-8730-62a6b2acec2c" containerName="mount-cgroup" Oct 2 18:54:56.750054 systemd[1]: Created slice kubepods-burstable-podecc2dca5_965b_4e8e_928e_51ef9a8dd153.slice. 
Oct 2 18:54:56.885025 kubelet[2198]: E1002 18:54:56.884946 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:56.907962 kubelet[2198]: I1002 18:54:56.907932 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-clustermesh-secrets\") pod \"cilium-8lx8w\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " pod="kube-system/cilium-8lx8w" Oct 2 18:54:56.908162 kubelet[2198]: I1002 18:54:56.908139 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-host-proc-sys-net\") pod \"cilium-8lx8w\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " pod="kube-system/cilium-8lx8w" Oct 2 18:54:56.908376 kubelet[2198]: I1002 18:54:56.908355 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-bpf-maps\") pod \"cilium-8lx8w\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " pod="kube-system/cilium-8lx8w" Oct 2 18:54:56.908537 kubelet[2198]: I1002 18:54:56.908515 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-hostproc\") pod \"cilium-8lx8w\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " pod="kube-system/cilium-8lx8w" Oct 2 18:54:56.908697 kubelet[2198]: I1002 18:54:56.908676 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-cilium-config-path\") pod \"cilium-8lx8w\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " pod="kube-system/cilium-8lx8w" Oct 2 18:54:56.908850 kubelet[2198]: I1002 18:54:56.908829 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-etc-cni-netd\") pod \"cilium-8lx8w\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " pod="kube-system/cilium-8lx8w" Oct 2 18:54:56.909012 kubelet[2198]: I1002 18:54:56.908991 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-lib-modules\") pod \"cilium-8lx8w\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " pod="kube-system/cilium-8lx8w" Oct 2 18:54:56.909179 kubelet[2198]: I1002 18:54:56.909158 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-xtables-lock\") pod \"cilium-8lx8w\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " pod="kube-system/cilium-8lx8w" Oct 2 18:54:56.909377 kubelet[2198]: I1002 18:54:56.909356 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcvlc\" (UniqueName: \"kubernetes.io/projected/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-kube-api-access-fcvlc\") pod \"cilium-8lx8w\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " pod="kube-system/cilium-8lx8w" Oct 2 18:54:56.909549 kubelet[2198]: I1002 18:54:56.909527 2198 
reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-cilium-run\") pod \"cilium-8lx8w\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " pod="kube-system/cilium-8lx8w" Oct 2 18:54:56.909704 kubelet[2198]: I1002 18:54:56.909681 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-cni-path\") pod \"cilium-8lx8w\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " pod="kube-system/cilium-8lx8w" Oct 2 18:54:56.909863 kubelet[2198]: I1002 18:54:56.909843 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-hubble-tls\") pod \"cilium-8lx8w\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " pod="kube-system/cilium-8lx8w" Oct 2 18:54:56.910028 kubelet[2198]: I1002 18:54:56.910008 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-cilium-cgroup\") pod \"cilium-8lx8w\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " pod="kube-system/cilium-8lx8w" Oct 2 18:54:56.910210 kubelet[2198]: I1002 18:54:56.910167 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-host-proc-sys-kernel\") pod \"cilium-8lx8w\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " pod="kube-system/cilium-8lx8w" Oct 2 18:54:57.060234 env[1730]: time="2023-10-02T18:54:57.059560882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8lx8w,Uid:ecc2dca5-965b-4e8e-928e-51ef9a8dd153,Namespace:kube-system,Attempt:0,}" Oct 2 18:54:57.093025 env[1730]: time="2023-10-02T18:54:57.092874582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 18:54:57.093025 env[1730]: time="2023-10-02T18:54:57.092959998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 18:54:57.093358 env[1730]: time="2023-10-02T18:54:57.092987214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 18:54:57.093623 env[1730]: time="2023-10-02T18:54:57.093507245Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f46431fd1f5ea95e721f4aef095f03064b8062c3639750589cf3b246c48709ab pid=2995 runtime=io.containerd.runc.v2 Oct 2 18:54:57.122884 systemd[1]: Started cri-containerd-f46431fd1f5ea95e721f4aef095f03064b8062c3639750589cf3b246c48709ab.scope. 
Oct 2 18:54:57.160000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.160000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.178202 kernel: audit: type=1400 audit(1696272897.160:727): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.178306 kernel: audit: type=1400 audit(1696272897.160:728): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.160000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.192990 kernel: audit: type=1400 audit(1696272897.160:729): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.160000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.208426 kernel: audit: type=1400 audit(1696272897.160:730): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.160000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.216511 kernel: audit: type=1400 audit(1696272897.160:731): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.160000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.224650 kernel: audit: type=1400 audit(1696272897.160:732): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.160000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.232724 kernel: audit: type=1400 audit(1696272897.160:733): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.160000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.160000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 18:54:57.161000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.161000 audit: BPF prog-id=84 op=LOAD Oct 2 18:54:57.161000 audit[3006]: AVC avc: denied { bpf } for pid=3006 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.242257 kernel: audit: type=1400 audit(1696272897.160:734): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.161000 audit[3006]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001bdb38 a2=10 a3=0 items=0 ppid=2995 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:54:57.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634363433316664316635656139356537323166346165663039356630 Oct 2 18:54:57.161000 audit[3006]: AVC avc: denied { perfmon } for pid=3006 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.161000 audit[3006]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001bd5a0 a2=3c a3=0 items=0 ppid=2995 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:54:57.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634363433316664316635656139356537323166346165663039356630 Oct 2 18:54:57.161000 audit[3006]: AVC avc: denied { bpf } for pid=3006 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.161000 audit[3006]: AVC avc: denied { bpf } for pid=3006 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.161000 audit[3006]: AVC avc: denied { bpf } for pid=3006 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.161000 audit[3006]: AVC avc: denied { perfmon } for pid=3006 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.161000 audit[3006]: AVC avc: denied { perfmon } for pid=3006 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.161000 audit[3006]: AVC avc: denied { perfmon } for pid=3006 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.161000 audit[3006]: AVC avc: 
denied { perfmon } for pid=3006 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.161000 audit[3006]: AVC avc: denied { perfmon } for pid=3006 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.161000 audit[3006]: AVC avc: denied { bpf } for pid=3006 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.161000 audit[3006]: AVC avc: denied { bpf } for pid=3006 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.161000 audit: BPF prog-id=85 op=LOAD Oct 2 18:54:57.161000 audit[3006]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bd8e0 a2=78 a3=0 items=0 ppid=2995 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:54:57.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634363433316664316635656139356537323166346165663039356630 Oct 2 18:54:57.169000 audit[3006]: AVC avc: denied { bpf } for pid=3006 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.169000 audit[3006]: AVC avc: denied { bpf } for pid=3006 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.169000 audit[3006]: AVC avc: denied { perfmon } for pid=3006 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.169000 audit[3006]: AVC avc: denied { perfmon } for pid=3006 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.169000 audit[3006]: AVC avc: denied { perfmon } for pid=3006 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.169000 audit[3006]: AVC avc: denied { perfmon } for pid=3006 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.169000 audit[3006]: AVC avc: denied { perfmon } for pid=3006 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.169000 audit[3006]: AVC avc: denied { bpf } for pid=3006 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.169000 audit[3006]: AVC avc: denied { bpf } for pid=3006 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.169000 audit: BPF prog-id=86 op=LOAD Oct 2 18:54:57.169000 audit[3006]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001bd670 a2=78 
a3=0 items=0 ppid=2995 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:54:57.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634363433316664316635656139356537323166346165663039356630 Oct 2 18:54:57.169000 audit: BPF prog-id=86 op=UNLOAD Oct 2 18:54:57.169000 audit: BPF prog-id=85 op=UNLOAD Oct 2 18:54:57.169000 audit[3006]: AVC avc: denied { bpf } for pid=3006 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.169000 audit[3006]: AVC avc: denied { bpf } for pid=3006 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.169000 audit[3006]: AVC avc: denied { bpf } for pid=3006 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.169000 audit[3006]: AVC avc: denied { perfmon } for pid=3006 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.169000 audit[3006]: AVC avc: denied { perfmon } for pid=3006 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.169000 audit[3006]: AVC avc: denied { perfmon } for pid=3006 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.169000 audit[3006]: AVC avc: denied { perfmon } for pid=3006 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.169000 audit[3006]: AVC avc: denied { perfmon } for pid=3006 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.169000 audit[3006]: AVC avc: denied { bpf } for pid=3006 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.169000 audit[3006]: AVC avc: denied { bpf } for pid=3006 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:54:57.169000 audit: BPF prog-id=87 op=LOAD Oct 2 18:54:57.169000 audit[3006]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bdb40 a2=78 a3=0 items=0 ppid=2995 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:54:57.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634363433316664316635656139356537323166346165663039356630 Oct 2 18:54:57.245991 env[1730]: time="2023-10-02T18:54:57.243095614Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-8lx8w,Uid:ecc2dca5-965b-4e8e-928e-51ef9a8dd153,Namespace:kube-system,Attempt:0,} returns sandbox id \"f46431fd1f5ea95e721f4aef095f03064b8062c3639750589cf3b246c48709ab\"" Oct 2 18:54:57.248613 env[1730]: time="2023-10-02T18:54:57.248540289Z" level=info msg="CreateContainer within sandbox \"f46431fd1f5ea95e721f4aef095f03064b8062c3639750589cf3b246c48709ab\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 18:54:57.264628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1725786171.mount: Deactivated successfully. Oct 2 18:54:57.273783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1110999645.mount: Deactivated successfully. Oct 2 18:54:57.283003 env[1730]: time="2023-10-02T18:54:57.282922634Z" level=info msg="CreateContainer within sandbox \"f46431fd1f5ea95e721f4aef095f03064b8062c3639750589cf3b246c48709ab\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2f93bdeaf1c13fdfd357089ec0ea2efdb3a8f0fe87984e0ff00b1c6671835d32\"" Oct 2 18:54:57.283813 env[1730]: time="2023-10-02T18:54:57.283766808Z" level=info msg="StartContainer for \"2f93bdeaf1c13fdfd357089ec0ea2efdb3a8f0fe87984e0ff00b1c6671835d32\"" Oct 2 18:54:57.325119 systemd[1]: Started cri-containerd-2f93bdeaf1c13fdfd357089ec0ea2efdb3a8f0fe87984e0ff00b1c6671835d32.scope. Oct 2 18:54:57.365645 systemd[1]: cri-containerd-2f93bdeaf1c13fdfd357089ec0ea2efdb3a8f0fe87984e0ff00b1c6671835d32.scope: Deactivated successfully. Oct 2 18:54:57.398269 env[1730]: time="2023-10-02T18:54:57.398167190Z" level=info msg="shim disconnected" id=2f93bdeaf1c13fdfd357089ec0ea2efdb3a8f0fe87984e0ff00b1c6671835d32 Oct 2 18:54:57.398269 env[1730]: time="2023-10-02T18:54:57.398265170Z" level=warning msg="cleaning up after shim disconnected" id=2f93bdeaf1c13fdfd357089ec0ea2efdb3a8f0fe87984e0ff00b1c6671835d32 namespace=k8s.io Oct 2 18:54:57.398608 env[1730]: time="2023-10-02T18:54:57.398287718Z" level=info msg="cleaning up dead shim" Oct 2 18:54:57.424524 env[1730]: time="2023-10-02T18:54:57.424449878Z" level=warning msg="cleanup warnings time=\"2023-10-02T18:54:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3054 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T18:54:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2f93bdeaf1c13fdfd357089ec0ea2efdb3a8f0fe87984e0ff00b1c6671835d32/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 18:54:57.424997 env[1730]: time="2023-10-02T18:54:57.424909033Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed" Oct 2 18:54:57.426369 env[1730]: time="2023-10-02T18:54:57.426302998Z" level=error msg="Failed to pipe stdout of container \"2f93bdeaf1c13fdfd357089ec0ea2efdb3a8f0fe87984e0ff00b1c6671835d32\"" error="reading from a closed fifo" Oct 2 18:54:57.426477 env[1730]: time="2023-10-02T18:54:57.426403978Z" level=error msg="Failed to pipe stderr of container \"2f93bdeaf1c13fdfd357089ec0ea2efdb3a8f0fe87984e0ff00b1c6671835d32\"" error="reading from a closed fifo" Oct 2 18:54:57.428788 env[1730]: time="2023-10-02T18:54:57.428707324Z" level=error msg="StartContainer for \"2f93bdeaf1c13fdfd357089ec0ea2efdb3a8f0fe87984e0ff00b1c6671835d32\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 
18:54:57.429095 kubelet[2198]: E1002 18:54:57.429041 2198 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2f93bdeaf1c13fdfd357089ec0ea2efdb3a8f0fe87984e0ff00b1c6671835d32" Oct 2 18:54:57.429448 kubelet[2198]: E1002 18:54:57.429303 2198 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 18:54:57.429448 kubelet[2198]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 18:54:57.429448 kubelet[2198]: rm /hostbin/cilium-mount Oct 2 18:54:57.429448 kubelet[2198]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-fcvlc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-8lx8w_kube-system(ecc2dca5-965b-4e8e-928e-51ef9a8dd153): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 18:54:57.429918 kubelet[2198]: E1002 18:54:57.429602 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-8lx8w" podUID=ecc2dca5-965b-4e8e-928e-51ef9a8dd153 Oct 2 18:54:57.673528 env[1730]: time="2023-10-02T18:54:57.672554109Z" level=info msg="StopPodSandbox for \"f46431fd1f5ea95e721f4aef095f03064b8062c3639750589cf3b246c48709ab\"" Oct 2 18:54:57.673861 env[1730]: time="2023-10-02T18:54:57.673790107Z" level=info msg="Container to stop \"2f93bdeaf1c13fdfd357089ec0ea2efdb3a8f0fe87984e0ff00b1c6671835d32\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Oct 2 18:54:57.692087 systemd[1]: cri-containerd-f46431fd1f5ea95e721f4aef095f03064b8062c3639750589cf3b246c48709ab.scope: Deactivated successfully. Oct 2 18:54:57.691000 audit: BPF prog-id=84 op=UNLOAD Oct 2 18:54:57.696000 audit: BPF prog-id=87 op=UNLOAD Oct 2 18:54:57.754942 env[1730]: time="2023-10-02T18:54:57.754846369Z" level=info msg="shim disconnected" id=f46431fd1f5ea95e721f4aef095f03064b8062c3639750589cf3b246c48709ab Oct 2 18:54:57.754942 env[1730]: time="2023-10-02T18:54:57.754930633Z" level=warning msg="cleaning up after shim disconnected" id=f46431fd1f5ea95e721f4aef095f03064b8062c3639750589cf3b246c48709ab namespace=k8s.io Oct 2 18:54:57.755345 env[1730]: time="2023-10-02T18:54:57.754955125Z" level=info msg="cleaning up dead shim" Oct 2 18:54:57.781296 env[1730]: time="2023-10-02T18:54:57.781222284Z" level=warning msg="cleanup warnings time=\"2023-10-02T18:54:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3086 runtime=io.containerd.runc.v2\n" Oct 2 18:54:57.781790 env[1730]: time="2023-10-02T18:54:57.781743587Z" level=info msg="TearDown network for sandbox \"f46431fd1f5ea95e721f4aef095f03064b8062c3639750589cf3b246c48709ab\" successfully" Oct 2 18:54:57.781790 env[1730]: time="2023-10-02T18:54:57.781790015Z" level=info msg="StopPodSandbox for \"f46431fd1f5ea95e721f4aef095f03064b8062c3639750589cf3b246c48709ab\" returns successfully" Oct 2 18:54:57.885909 kubelet[2198]: E1002 18:54:57.885837 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:57.916553 kubelet[2198]: I1002 18:54:57.916508 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-xtables-lock\") pod \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " Oct 2 18:54:57.916668 kubelet[2198]: I1002 18:54:57.916588 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcvlc\" (UniqueName: \"kubernetes.io/projected/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-kube-api-access-fcvlc\") pod \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " Oct 2 18:54:57.916668 kubelet[2198]: I1002 18:54:57.916634 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-hubble-tls\") pod \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " Oct 2 18:54:57.916797 kubelet[2198]: I1002 18:54:57.916674 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-cilium-cgroup\") pod \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " Oct 2 18:54:57.916797 kubelet[2198]: I1002 18:54:57.916713 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-host-proc-sys-net\") pod \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " Oct 2 18:54:57.916797 kubelet[2198]: I1002 18:54:57.916751 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-bpf-maps\") pod 
\"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " Oct 2 18:54:57.916979 kubelet[2198]: I1002 18:54:57.916797 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-cilium-config-path\") pod \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " Oct 2 18:54:57.916979 kubelet[2198]: I1002 18:54:57.916835 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-lib-modules\") pod \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " Oct 2 18:54:57.916979 kubelet[2198]: I1002 18:54:57.916873 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-host-proc-sys-kernel\") pod \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " Oct 2 18:54:57.916979 kubelet[2198]: I1002 18:54:57.916912 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-hostproc\") pod \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " Oct 2 18:54:57.916979 kubelet[2198]: I1002 18:54:57.916952 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-etc-cni-netd\") pod \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " Oct 2 18:54:57.917308 kubelet[2198]: I1002 18:54:57.916989 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-cilium-run\") pod \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " Oct 2 18:54:57.917308 kubelet[2198]: I1002 18:54:57.917058 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-clustermesh-secrets\") pod \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " Oct 2 18:54:57.917308 kubelet[2198]: I1002 18:54:57.917096 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-cni-path\") pod \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\" (UID: \"ecc2dca5-965b-4e8e-928e-51ef9a8dd153\") " Oct 2 18:54:57.917308 kubelet[2198]: I1002 18:54:57.917161 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-cni-path" (OuterVolumeSpecName: "cni-path") pod "ecc2dca5-965b-4e8e-928e-51ef9a8dd153" (UID: "ecc2dca5-965b-4e8e-928e-51ef9a8dd153"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:54:57.917308 kubelet[2198]: I1002 18:54:57.917242 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ecc2dca5-965b-4e8e-928e-51ef9a8dd153" (UID: "ecc2dca5-965b-4e8e-928e-51ef9a8dd153"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:54:57.917747 kubelet[2198]: I1002 18:54:57.917693 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ecc2dca5-965b-4e8e-928e-51ef9a8dd153" (UID: "ecc2dca5-965b-4e8e-928e-51ef9a8dd153"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:54:57.927227 kubelet[2198]: I1002 18:54:57.925800 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ecc2dca5-965b-4e8e-928e-51ef9a8dd153" (UID: "ecc2dca5-965b-4e8e-928e-51ef9a8dd153"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 18:54:57.927464 kubelet[2198]: I1002 18:54:57.925828 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-kube-api-access-fcvlc" (OuterVolumeSpecName: "kube-api-access-fcvlc") pod "ecc2dca5-965b-4e8e-928e-51ef9a8dd153" (UID: "ecc2dca5-965b-4e8e-928e-51ef9a8dd153"). InnerVolumeSpecName "kube-api-access-fcvlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 18:54:57.927586 kubelet[2198]: I1002 18:54:57.925867 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ecc2dca5-965b-4e8e-928e-51ef9a8dd153" (UID: "ecc2dca5-965b-4e8e-928e-51ef9a8dd153"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:54:57.927697 kubelet[2198]: I1002 18:54:57.925894 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-hostproc" (OuterVolumeSpecName: "hostproc") pod "ecc2dca5-965b-4e8e-928e-51ef9a8dd153" (UID: "ecc2dca5-965b-4e8e-928e-51ef9a8dd153"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:54:57.927843 kubelet[2198]: I1002 18:54:57.925920 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ecc2dca5-965b-4e8e-928e-51ef9a8dd153" (UID: "ecc2dca5-965b-4e8e-928e-51ef9a8dd153"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:54:57.927958 kubelet[2198]: I1002 18:54:57.925947 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ecc2dca5-965b-4e8e-928e-51ef9a8dd153" (UID: "ecc2dca5-965b-4e8e-928e-51ef9a8dd153"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:54:57.928119 kubelet[2198]: I1002 18:54:57.928086 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ecc2dca5-965b-4e8e-928e-51ef9a8dd153" (UID: "ecc2dca5-965b-4e8e-928e-51ef9a8dd153"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:54:57.928343 kubelet[2198]: I1002 18:54:57.928315 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ecc2dca5-965b-4e8e-928e-51ef9a8dd153" (UID: "ecc2dca5-965b-4e8e-928e-51ef9a8dd153"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:54:57.928506 kubelet[2198]: I1002 18:54:57.928480 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ecc2dca5-965b-4e8e-928e-51ef9a8dd153" (UID: "ecc2dca5-965b-4e8e-928e-51ef9a8dd153"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:54:57.928895 kubelet[2198]: W1002 18:54:57.928848 2198 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/ecc2dca5-965b-4e8e-928e-51ef9a8dd153/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 18:54:57.933480 kubelet[2198]: I1002 18:54:57.933422 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ecc2dca5-965b-4e8e-928e-51ef9a8dd153" (UID: "ecc2dca5-965b-4e8e-928e-51ef9a8dd153"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 18:54:57.935977 kubelet[2198]: I1002 18:54:57.935922 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ecc2dca5-965b-4e8e-928e-51ef9a8dd153" (UID: "ecc2dca5-965b-4e8e-928e-51ef9a8dd153"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 18:54:57.943615 kubelet[2198]: E1002 18:54:57.943584 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:54:58.017579 kubelet[2198]: I1002 18:54:58.017549 2198 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-bpf-maps\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:58.017766 kubelet[2198]: I1002 18:54:58.017745 2198 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-cilium-config-path\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:58.017894 kubelet[2198]: I1002 18:54:58.017874 2198 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-lib-modules\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:58.018018 kubelet[2198]: I1002 18:54:58.017998 2198 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-host-proc-sys-kernel\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:58.018134 kubelet[2198]: I1002 18:54:58.018114 2198 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-host-proc-sys-net\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:58.018296 kubelet[2198]: I1002 18:54:58.018275 2198 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-cilium-run\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:58.018428 kubelet[2198]: I1002 18:54:58.018408 2198 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-hostproc\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:58.018549 kubelet[2198]: I1002 18:54:58.018530 2198 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-etc-cni-netd\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:58.018670 kubelet[2198]: I1002 18:54:58.018651 2198 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-clustermesh-secrets\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:58.018791 kubelet[2198]: I1002 18:54:58.018771 2198 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-cni-path\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:58.018912 kubelet[2198]: I1002 18:54:58.018893 2198 reconciler.go:399] "Volume detached for volume \"kube-api-access-fcvlc\" (UniqueName: \"kubernetes.io/projected/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-kube-api-access-fcvlc\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:58.019036 kubelet[2198]: I1002 18:54:58.019016 2198 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-hubble-tls\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:58.019162 kubelet[2198]: I1002 18:54:58.019142 2198 
reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-cilium-cgroup\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:58.019315 kubelet[2198]: I1002 18:54:58.019295 2198 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecc2dca5-965b-4e8e-928e-51ef9a8dd153-xtables-lock\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:54:58.180425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f46431fd1f5ea95e721f4aef095f03064b8062c3639750589cf3b246c48709ab-rootfs.mount: Deactivated successfully. Oct 2 18:54:58.180595 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f46431fd1f5ea95e721f4aef095f03064b8062c3639750589cf3b246c48709ab-shm.mount: Deactivated successfully. Oct 2 18:54:58.180729 systemd[1]: var-lib-kubelet-pods-ecc2dca5\x2d965b\x2d4e8e\x2d928e\x2d51ef9a8dd153-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfcvlc.mount: Deactivated successfully. Oct 2 18:54:58.180867 systemd[1]: var-lib-kubelet-pods-ecc2dca5\x2d965b\x2d4e8e\x2d928e\x2d51ef9a8dd153-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 18:54:58.180996 systemd[1]: var-lib-kubelet-pods-ecc2dca5\x2d965b\x2d4e8e\x2d928e\x2d51ef9a8dd153-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 18:54:58.197960 env[1730]: time="2023-10-02T18:54:58.197890834Z" level=info msg="StopPodSandbox for \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\"" Oct 2 18:54:58.198164 env[1730]: time="2023-10-02T18:54:58.198027393Z" level=info msg="TearDown network for sandbox \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\" successfully" Oct 2 18:54:58.198164 env[1730]: time="2023-10-02T18:54:58.198084897Z" level=info msg="StopPodSandbox for \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\" returns successfully" Oct 2 18:54:58.200774 kubelet[2198]: I1002 18:54:58.200740 2198 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=8841f515-58a8-4e3a-8730-62a6b2acec2c path="/var/lib/kubelet/pods/8841f515-58a8-4e3a-8730-62a6b2acec2c/volumes" Oct 2 18:54:58.209998 systemd[1]: Removed slice kubepods-burstable-podecc2dca5_965b_4e8e_928e_51ef9a8dd153.slice. 
Oct 2 18:54:58.678576 kubelet[2198]: I1002 18:54:58.678543 2198 scope.go:115] "RemoveContainer" containerID="2f93bdeaf1c13fdfd357089ec0ea2efdb3a8f0fe87984e0ff00b1c6671835d32" Oct 2 18:54:58.683330 env[1730]: time="2023-10-02T18:54:58.683259281Z" level=info msg="RemoveContainer for \"2f93bdeaf1c13fdfd357089ec0ea2efdb3a8f0fe87984e0ff00b1c6671835d32\"" Oct 2 18:54:58.688411 env[1730]: time="2023-10-02T18:54:58.688338318Z" level=info msg="RemoveContainer for \"2f93bdeaf1c13fdfd357089ec0ea2efdb3a8f0fe87984e0ff00b1c6671835d32\" returns successfully" Oct 2 18:54:58.886256 kubelet[2198]: E1002 18:54:58.886186 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:54:59.886567 kubelet[2198]: E1002 18:54:59.886512 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:00.200107 kubelet[2198]: I1002 18:55:00.200003 2198 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ecc2dca5-965b-4e8e-928e-51ef9a8dd153 path="/var/lib/kubelet/pods/ecc2dca5-965b-4e8e-928e-51ef9a8dd153/volumes" Oct 2 18:55:00.505734 kubelet[2198]: W1002 18:55:00.505579 2198 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecc2dca5_965b_4e8e_928e_51ef9a8dd153.slice/cri-containerd-2f93bdeaf1c13fdfd357089ec0ea2efdb3a8f0fe87984e0ff00b1c6671835d32.scope WatchSource:0}: container "2f93bdeaf1c13fdfd357089ec0ea2efdb3a8f0fe87984e0ff00b1c6671835d32" in namespace "k8s.io": not found Oct 2 18:55:00.887587 kubelet[2198]: E1002 18:55:00.887553 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:01.103693 kubelet[2198]: I1002 18:55:01.103647 2198 topology_manager.go:205] "Topology Admit Handler" Oct 2 18:55:01.103929 kubelet[2198]: E1002 18:55:01.103904 2198 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="ecc2dca5-965b-4e8e-928e-51ef9a8dd153" containerName="mount-cgroup" Oct 2 18:55:01.104080 kubelet[2198]: I1002 18:55:01.104058 2198 memory_manager.go:345] "RemoveStaleState removing state" podUID="ecc2dca5-965b-4e8e-928e-51ef9a8dd153" containerName="mount-cgroup" Oct 2 18:55:01.105607 kubelet[2198]: I1002 18:55:01.105548 2198 topology_manager.go:205] "Topology Admit Handler" Oct 2 18:55:01.114173 systemd[1]: Created slice kubepods-besteffort-podcf9a0072_e330_4e5b_870e_92c6c3e72ea8.slice. Oct 2 18:55:01.124170 systemd[1]: Created slice kubepods-burstable-pod41d6a429_b7aa_4bd1_ab15_098902c0e9dd.slice. 
Oct 2 18:55:01.237217 kubelet[2198]: I1002 18:55:01.237053 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-host-proc-sys-kernel\") pod \"cilium-4gtlx\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " pod="kube-system/cilium-4gtlx" Oct 2 18:55:01.237791 kubelet[2198]: I1002 18:55:01.237745 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-hubble-tls\") pod \"cilium-4gtlx\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " pod="kube-system/cilium-4gtlx" Oct 2 18:55:01.237921 kubelet[2198]: I1002 18:55:01.237863 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-cilium-cgroup\") pod \"cilium-4gtlx\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " pod="kube-system/cilium-4gtlx" Oct 2 18:55:01.237997 kubelet[2198]: I1002 18:55:01.237935 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-etc-cni-netd\") pod \"cilium-4gtlx\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " pod="kube-system/cilium-4gtlx" Oct 2 18:55:01.238062 kubelet[2198]: I1002 18:55:01.238006 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-clustermesh-secrets\") pod \"cilium-4gtlx\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " pod="kube-system/cilium-4gtlx" Oct 2 18:55:01.238126 kubelet[2198]: I1002 18:55:01.238059 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf9a0072-e330-4e5b-870e-92c6c3e72ea8-cilium-config-path\") pod \"cilium-operator-69b677f97c-lfrmj\" (UID: \"cf9a0072-e330-4e5b-870e-92c6c3e72ea8\") " pod="kube-system/cilium-operator-69b677f97c-lfrmj" Oct 2 18:55:01.238226 kubelet[2198]: I1002 18:55:01.238130 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2qd5\" (UniqueName: \"kubernetes.io/projected/cf9a0072-e330-4e5b-870e-92c6c3e72ea8-kube-api-access-g2qd5\") pod \"cilium-operator-69b677f97c-lfrmj\" (UID: \"cf9a0072-e330-4e5b-870e-92c6c3e72ea8\") " pod="kube-system/cilium-operator-69b677f97c-lfrmj" Oct 2 18:55:01.238318 kubelet[2198]: I1002 18:55:01.238231 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-xtables-lock\") pod \"cilium-4gtlx\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " pod="kube-system/cilium-4gtlx" Oct 2 18:55:01.238318 kubelet[2198]: I1002 18:55:01.238277 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-lib-modules\") pod \"cilium-4gtlx\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " pod="kube-system/cilium-4gtlx" Oct 2 18:55:01.238445 kubelet[2198]: I1002 18:55:01.238349 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-cilium-config-path\") pod \"cilium-4gtlx\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " pod="kube-system/cilium-4gtlx" Oct 2 18:55:01.238445 kubelet[2198]: I1002 18:55:01.238418 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bzsg\" (UniqueName: \"kubernetes.io/projected/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-kube-api-access-4bzsg\") pod \"cilium-4gtlx\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " pod="kube-system/cilium-4gtlx" Oct 2 18:55:01.238564 kubelet[2198]: I1002 18:55:01.238489 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-cilium-run\") pod \"cilium-4gtlx\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " pod="kube-system/cilium-4gtlx" Oct 2 18:55:01.238638 kubelet[2198]: I1002 18:55:01.238578 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-hostproc\") pod \"cilium-4gtlx\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " pod="kube-system/cilium-4gtlx" Oct 2 18:55:01.238701 kubelet[2198]: I1002 18:55:01.238628 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-cni-path\") pod \"cilium-4gtlx\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " pod="kube-system/cilium-4gtlx" Oct 2 18:55:01.238701 kubelet[2198]: I1002 18:55:01.238698 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-bpf-maps\") pod \"cilium-4gtlx\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " pod="kube-system/cilium-4gtlx" Oct 2 18:55:01.238819 kubelet[2198]: I1002 18:55:01.238768 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-cilium-ipsec-secrets\") pod \"cilium-4gtlx\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " pod="kube-system/cilium-4gtlx" Oct 2 18:55:01.238882 kubelet[2198]: I1002 18:55:01.238841 2198 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-host-proc-sys-net\") pod \"cilium-4gtlx\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " pod="kube-system/cilium-4gtlx" Oct 2 18:55:01.423389 env[1730]: time="2023-10-02T18:55:01.423330308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-lfrmj,Uid:cf9a0072-e330-4e5b-870e-92c6c3e72ea8,Namespace:kube-system,Attempt:0,}" Oct 2 18:55:01.433229 env[1730]: time="2023-10-02T18:55:01.433023254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4gtlx,Uid:41d6a429-b7aa-4bd1-ab15-098902c0e9dd,Namespace:kube-system,Attempt:0,}" Oct 2 18:55:01.474933 env[1730]: time="2023-10-02T18:55:01.474810775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 18:55:01.475156 env[1730]: time="2023-10-02T18:55:01.474888307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 18:55:01.475156 env[1730]: time="2023-10-02T18:55:01.474915391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 18:55:01.475623 env[1730]: time="2023-10-02T18:55:01.475501782Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/63d7320a21e9a3f867500b09dd4f0f02fb9d3093089ffc74ca82d60b5d5bec8c pid=3117 runtime=io.containerd.runc.v2 Oct 2 18:55:01.485007 env[1730]: time="2023-10-02T18:55:01.484877400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 18:55:01.485245 env[1730]: time="2023-10-02T18:55:01.484964256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 18:55:01.485245 env[1730]: time="2023-10-02T18:55:01.484991400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 18:55:01.485604 env[1730]: time="2023-10-02T18:55:01.485505911Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f13333cc929063a5633bbe38494e110c80254980003a451e5070be80a562e1e8 pid=3128 runtime=io.containerd.runc.v2 Oct 2 18:55:01.517775 systemd[1]: Started cri-containerd-63d7320a21e9a3f867500b09dd4f0f02fb9d3093089ffc74ca82d60b5d5bec8c.scope. Oct 2 18:55:01.542473 systemd[1]: Started cri-containerd-f13333cc929063a5633bbe38494e110c80254980003a451e5070be80a562e1e8.scope. 
Oct 2 18:55:01.583336 kernel: kauditd_printk_skb: 51 callbacks suppressed Oct 2 18:55:01.583485 kernel: audit: type=1400 audit(1696272901.571:747): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.571000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.571000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.593477 kernel: audit: type=1400 audit(1696272901.571:748): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.593618 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 18:55:01.572000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.603098 kernel: audit: type=1400 audit(1696272901.572:749): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.603179 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 18:55:01.572000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.613142 kernel: audit: type=1400 audit(1696272901.572:750): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.613239 kernel: audit: backlog limit exceeded Oct 2 18:55:01.572000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.622204 kernel: audit: type=1400 audit(1696272901.572:751): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.622308 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 18:55:01.626869 kernel: audit: audit_lost=2 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 18:55:01.572000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.572000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.572000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.572000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 18:55:01.572000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.572000 audit: BPF prog-id=88 op=LOAD Oct 2 18:55:01.583000 audit[3142]: AVC avc: denied { bpf } for pid=3142 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.583000 audit[3142]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=400011db38 a2=10 a3=0 items=0 ppid=3117 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:55:01.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633643733323061323165396133663836373530306230396464346630 Oct 2 18:55:01.583000 audit[3142]: AVC avc: denied { perfmon } for pid=3142 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.583000 audit[3142]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=400011d5a0 a2=3c a3=0 items=0 ppid=3117 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:55:01.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633643733323061323165396133663836373530306230396464346630 Oct 2 18:55:01.583000 audit[3142]: AVC avc: denied { bpf } for pid=3142 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.583000 audit[3142]: AVC avc: denied { bpf } for pid=3142 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.583000 audit[3142]: AVC avc: denied { bpf } for pid=3142 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.583000 audit[3142]: AVC avc: denied { perfmon } for pid=3142 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.583000 audit[3142]: AVC avc: denied { perfmon } for pid=3142 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.583000 audit[3142]: AVC avc: denied { perfmon } for pid=3142 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.583000 audit[3142]: AVC avc: denied { perfmon } for pid=3142 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.583000 audit[3142]: AVC avc: denied { perfmon } for pid=3142 comm="runc" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.583000 audit[3142]: AVC avc: denied { bpf } for pid=3142 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.583000 audit[3142]: AVC avc: denied { bpf } for pid=3142 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.583000 audit: BPF prog-id=89 op=LOAD Oct 2 18:55:01.583000 audit[3142]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400011d8e0 a2=78 a3=0 items=0 ppid=3117 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:55:01.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633643733323061323165396133663836373530306230396464346630 Oct 2 18:55:01.584000 audit[3142]: AVC avc: denied { bpf } for pid=3142 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.584000 audit[3142]: AVC avc: denied { bpf } for pid=3142 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.584000 audit[3142]: AVC avc: denied { perfmon } for pid=3142 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.584000 audit[3142]: AVC avc: denied { perfmon } for pid=3142 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.584000 audit[3142]: AVC avc: denied { perfmon } for pid=3142 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.584000 audit[3142]: AVC avc: denied { perfmon } for pid=3142 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.584000 audit[3142]: AVC avc: denied { perfmon } for pid=3142 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.584000 audit[3142]: AVC avc: denied { bpf } for pid=3142 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.584000 audit[3142]: AVC avc: denied { bpf } for pid=3142 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.584000 audit: BPF prog-id=90 op=LOAD Oct 2 18:55:01.584000 audit[3142]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400011d670 a2=78 a3=0 items=0 ppid=3117 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:55:01.584000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633643733323061323165396133663836373530306230396464346630 Oct 2 18:55:01.584000 audit: BPF prog-id=90 op=UNLOAD Oct 2 18:55:01.584000 audit: BPF prog-id=89 op=UNLOAD Oct 2 18:55:01.584000 audit[3142]: AVC avc: denied { bpf } for pid=3142 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.584000 audit[3142]: AVC avc: denied { bpf } for pid=3142 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.584000 audit[3142]: AVC avc: denied { bpf } for pid=3142 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.584000 audit[3142]: AVC avc: denied { perfmon } for pid=3142 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.584000 audit[3142]: AVC avc: denied { perfmon } for pid=3142 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.584000 audit[3142]: AVC avc: denied { perfmon } for pid=3142 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.584000 audit[3142]: AVC avc: denied { perfmon } for pid=3142 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.584000 audit[3142]: AVC avc: denied { perfmon } for pid=3142 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.584000 audit[3142]: AVC avc: denied { bpf } for pid=3142 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.584000 audit[3142]: AVC avc: denied { bpf } for pid=3142 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.584000 audit: BPF prog-id=91 op=LOAD Oct 2 18:55:01.584000 audit[3142]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400011db40 a2=78 a3=0 items=0 ppid=3117 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:55:01.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633643733323061323165396133663836373530306230396464346630 Oct 2 18:55:01.584000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.584000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.584000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.584000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.584000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.614000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.614000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.614000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:01.672111 env[1730]: time="2023-10-02T18:55:01.672047506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4gtlx,Uid:41d6a429-b7aa-4bd1-ab15-098902c0e9dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"f13333cc929063a5633bbe38494e110c80254980003a451e5070be80a562e1e8\"" Oct 2 18:55:01.677921 env[1730]: time="2023-10-02T18:55:01.677866703Z" level=info msg="CreateContainer within sandbox \"f13333cc929063a5633bbe38494e110c80254980003a451e5070be80a562e1e8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 18:55:01.690415 env[1730]: time="2023-10-02T18:55:01.690356268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-lfrmj,Uid:cf9a0072-e330-4e5b-870e-92c6c3e72ea8,Namespace:kube-system,Attempt:0,} returns sandbox id \"63d7320a21e9a3f867500b09dd4f0f02fb9d3093089ffc74ca82d60b5d5bec8c\"" Oct 2 18:55:01.693904 env[1730]: time="2023-10-02T18:55:01.693850277Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\"" Oct 2 18:55:01.709815 env[1730]: time="2023-10-02T18:55:01.709754759Z" level=info msg="CreateContainer within sandbox \"f13333cc929063a5633bbe38494e110c80254980003a451e5070be80a562e1e8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9ee4399657b12a77ccc8c0d69b1838f0a6979b382842bff0e7869bfb9f7d3bdb\"" Oct 2 18:55:01.711213 env[1730]: time="2023-10-02T18:55:01.711145245Z" level=info msg="StartContainer for \"9ee4399657b12a77ccc8c0d69b1838f0a6979b382842bff0e7869bfb9f7d3bdb\"" Oct 2 18:55:01.752757 systemd[1]: Started cri-containerd-9ee4399657b12a77ccc8c0d69b1838f0a6979b382842bff0e7869bfb9f7d3bdb.scope. Oct 2 18:55:01.796956 systemd[1]: cri-containerd-9ee4399657b12a77ccc8c0d69b1838f0a6979b382842bff0e7869bfb9f7d3bdb.scope: Deactivated successfully. 
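
Note: the audit records above end with PROCTITLE lines whose proctitle= value is the hex-encoded command line of the audited process, with NUL bytes separating the arguments; decoded, the strings above begin "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/63d7320a...". A short decoding sketch follows. The sample string is only the leading part of the proctitle values shown above (which are themselves truncated in this log), so its decoded output is also truncated.

    # Sketch: decode an audit PROCTITLE value (hex-encoded argv, NUL-separated)
    # into a readable command line.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return ' '.join(arg.decode('utf-8', 'replace')
                        for arg in raw.split(b'\x00') if arg)

    sample = '72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F'
    print(decode_proctitle(sample))
    # -> runc --root /run/containerd/runc/k8s.io
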
Oct 2 18:55:01.823331 env[1730]: time="2023-10-02T18:55:01.823263963Z" level=info msg="shim disconnected" id=9ee4399657b12a77ccc8c0d69b1838f0a6979b382842bff0e7869bfb9f7d3bdb Oct 2 18:55:01.823742 env[1730]: time="2023-10-02T18:55:01.823708166Z" level=warning msg="cleaning up after shim disconnected" id=9ee4399657b12a77ccc8c0d69b1838f0a6979b382842bff0e7869bfb9f7d3bdb namespace=k8s.io Oct 2 18:55:01.823868 env[1730]: time="2023-10-02T18:55:01.823840201Z" level=info msg="cleaning up dead shim" Oct 2 18:55:01.853011 env[1730]: time="2023-10-02T18:55:01.852926995Z" level=warning msg="cleanup warnings time=\"2023-10-02T18:55:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3218 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T18:55:01Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9ee4399657b12a77ccc8c0d69b1838f0a6979b382842bff0e7869bfb9f7d3bdb/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 18:55:01.853504 env[1730]: time="2023-10-02T18:55:01.853408302Z" level=error msg="copy shim log" error="read /proc/self/fd/36: file already closed" Oct 2 18:55:01.853846 env[1730]: time="2023-10-02T18:55:01.853777889Z" level=error msg="Failed to pipe stdout of container \"9ee4399657b12a77ccc8c0d69b1838f0a6979b382842bff0e7869bfb9f7d3bdb\"" error="reading from a closed fifo" Oct 2 18:55:01.854346 env[1730]: time="2023-10-02T18:55:01.854290936Z" level=error msg="Failed to pipe stderr of container \"9ee4399657b12a77ccc8c0d69b1838f0a6979b382842bff0e7869bfb9f7d3bdb\"" error="reading from a closed fifo" Oct 2 18:55:01.860433 env[1730]: time="2023-10-02T18:55:01.860359025Z" level=error msg="StartContainer for \"9ee4399657b12a77ccc8c0d69b1838f0a6979b382842bff0e7869bfb9f7d3bdb\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 18:55:01.861275 kubelet[2198]: E1002 18:55:01.860727 2198 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9ee4399657b12a77ccc8c0d69b1838f0a6979b382842bff0e7869bfb9f7d3bdb" Oct 2 18:55:01.861275 kubelet[2198]: E1002 18:55:01.861136 2198 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 18:55:01.861275 kubelet[2198]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 18:55:01.861275 kubelet[2198]: rm /hostbin/cilium-mount Oct 2 18:55:01.861637 kubelet[2198]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4bzsg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-4gtlx_kube-system(41d6a429-b7aa-4bd1-ab15-098902c0e9dd): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 18:55:01.861805 kubelet[2198]: E1002 18:55:01.861232 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-4gtlx" podUID=41d6a429-b7aa-4bd1-ab15-098902c0e9dd Oct 2 18:55:01.888965 kubelet[2198]: E1002 18:55:01.888914 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:02.696379 env[1730]: time="2023-10-02T18:55:02.696324784Z" level=info msg="CreateContainer within sandbox \"f13333cc929063a5633bbe38494e110c80254980003a451e5070be80a562e1e8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 18:55:02.724262 env[1730]: time="2023-10-02T18:55:02.724165767Z" level=info msg="CreateContainer within sandbox \"f13333cc929063a5633bbe38494e110c80254980003a451e5070be80a562e1e8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805\"" Oct 2 18:55:02.725621 env[1730]: time="2023-10-02T18:55:02.725553900Z" level=info msg="StartContainer for \"0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805\"" Oct 2 18:55:02.773066 systemd[1]: Started cri-containerd-0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805.scope. Oct 2 18:55:02.783147 systemd[1]: run-containerd-runc-k8s.io-0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805-runc.I8IYhd.mount: Deactivated successfully. Oct 2 18:55:02.834040 systemd[1]: cri-containerd-0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805.scope: Deactivated successfully. 
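
Note: the kubelet error above dumps the full spec of the failing mount-cgroup init container: it runs sh -ec with a script that copies cilium-mount into the host CNI bin directory, nsenters into PID 1's cgroup and mount namespaces to run it against CGROUP_ROOT, and then removes the helper binary. The sketch below only substitutes the two Env values declared in that spec (CGROUP_ROOT=/run/cilium/cgroupv2, BIN_PATH=/opt/cni/bin) into the script to show the effective commands; it is an illustration of the spec shown above, not an attempt to execute it.

    # Sketch: expand the mount-cgroup init-container script using the Env
    # values from the spec dumped above.
    env = {"CGROUP_ROOT": "/run/cilium/cgroupv2", "BIN_PATH": "/opt/cni/bin"}

    script = ('cp /usr/bin/cilium-mount /hostbin/cilium-mount; '
              'nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt '
              '"${BIN_PATH}/cilium-mount" $CGROUP_ROOT; '
              'rm /hostbin/cilium-mount')

    expanded = (script
                .replace("${BIN_PATH}", env["BIN_PATH"])
                .replace("$CGROUP_ROOT", env["CGROUP_ROOT"]))
    for command in expanded.split("; "):
        print(command)
    # cp /usr/bin/cilium-mount /hostbin/cilium-mount
    # nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "/opt/cni/bin/cilium-mount" /run/cilium/cgroupv2
    # rm /hostbin/cilium-mount
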
Oct 2 18:55:02.890165 kubelet[2198]: E1002 18:55:02.890042 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:02.891303 env[1730]: time="2023-10-02T18:55:02.891228355Z" level=info msg="shim disconnected" id=0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805 Oct 2 18:55:02.891451 env[1730]: time="2023-10-02T18:55:02.891310386Z" level=warning msg="cleaning up after shim disconnected" id=0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805 namespace=k8s.io Oct 2 18:55:02.891451 env[1730]: time="2023-10-02T18:55:02.891333402Z" level=info msg="cleaning up dead shim" Oct 2 18:55:02.924051 env[1730]: time="2023-10-02T18:55:02.923967944Z" level=warning msg="cleanup warnings time=\"2023-10-02T18:55:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3257 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T18:55:02Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 18:55:02.925130 env[1730]: time="2023-10-02T18:55:02.924496640Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 18:55:02.926013 env[1730]: time="2023-10-02T18:55:02.925943297Z" level=error msg="Failed to pipe stderr of container \"0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805\"" error="reading from a closed fifo" Oct 2 18:55:02.926124 env[1730]: time="2023-10-02T18:55:02.926053865Z" level=error msg="Failed to pipe stdout of container \"0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805\"" error="reading from a closed fifo" Oct 2 18:55:02.930380 env[1730]: time="2023-10-02T18:55:02.929531375Z" level=error msg="StartContainer for \"0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 18:55:02.931129 kubelet[2198]: E1002 18:55:02.930875 2198 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805" Oct 2 18:55:02.931129 kubelet[2198]: E1002 18:55:02.931029 2198 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 18:55:02.931129 kubelet[2198]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 18:55:02.931129 kubelet[2198]: rm /hostbin/cilium-mount Oct 2 18:55:02.931507 kubelet[2198]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4bzsg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-4gtlx_kube-system(41d6a429-b7aa-4bd1-ab15-098902c0e9dd): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 18:55:02.931681 kubelet[2198]: E1002 18:55:02.931087 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-4gtlx" podUID=41d6a429-b7aa-4bd1-ab15-098902c0e9dd Oct 2 18:55:02.945542 kubelet[2198]: E1002 18:55:02.945501 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:55:03.357399 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805-rootfs.mount: Deactivated successfully. 
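
Note: both StartContainer attempts so far fail at the same point: the container spec requests an SELinux label (SELinuxOptions type spc_t), and the error string reports that writing to /proc/self/attr/keycreate returns "invalid argument", so the shim never gets an init pid and kubelet records a RunContainerError. A hedged probe is sketched below: it attempts a comparable write and reports the errno, which is one way to check whether that interface accepts labels on a given host. It assumes root on the affected host, and the label string is a placeholder, not taken from this log.

    import errno, os

    # Hedged probe: try writing an SELinux label to /proc/self/attr/keycreate,
    # the same interface the error above reports failing with EINVAL.
    LABEL = b"system_u:system_r:spc_t:s0"   # placeholder label (assumption)

    try:
        fd = os.open("/proc/self/attr/keycreate", os.O_WRONLY)
        try:
            os.write(fd, LABEL)
            print("keycreate accepted the label")
        finally:
            os.close(fd)
    except OSError as e:
        code = errno.errorcode.get(e.errno, "?")
        print(f"keycreate write failed: errno={e.errno} ({code})")
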
Oct 2 18:55:03.701233 kubelet[2198]: I1002 18:55:03.700231 2198 scope.go:115] "RemoveContainer" containerID="9ee4399657b12a77ccc8c0d69b1838f0a6979b382842bff0e7869bfb9f7d3bdb" Oct 2 18:55:03.701233 kubelet[2198]: I1002 18:55:03.700724 2198 scope.go:115] "RemoveContainer" containerID="9ee4399657b12a77ccc8c0d69b1838f0a6979b382842bff0e7869bfb9f7d3bdb" Oct 2 18:55:03.703053 env[1730]: time="2023-10-02T18:55:03.702998629Z" level=info msg="RemoveContainer for \"9ee4399657b12a77ccc8c0d69b1838f0a6979b382842bff0e7869bfb9f7d3bdb\"" Oct 2 18:55:03.707478 env[1730]: time="2023-10-02T18:55:03.707396610Z" level=info msg="RemoveContainer for \"9ee4399657b12a77ccc8c0d69b1838f0a6979b382842bff0e7869bfb9f7d3bdb\" returns successfully" Oct 2 18:55:03.708240 env[1730]: time="2023-10-02T18:55:03.708168304Z" level=info msg="RemoveContainer for \"9ee4399657b12a77ccc8c0d69b1838f0a6979b382842bff0e7869bfb9f7d3bdb\"" Oct 2 18:55:03.708394 env[1730]: time="2023-10-02T18:55:03.708261832Z" level=info msg="RemoveContainer for \"9ee4399657b12a77ccc8c0d69b1838f0a6979b382842bff0e7869bfb9f7d3bdb\" returns successfully" Oct 2 18:55:03.709018 kubelet[2198]: E1002 18:55:03.708982 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-4gtlx_kube-system(41d6a429-b7aa-4bd1-ab15-098902c0e9dd)\"" pod="kube-system/cilium-4gtlx" podUID=41d6a429-b7aa-4bd1-ab15-098902c0e9dd Oct 2 18:55:03.890992 kubelet[2198]: E1002 18:55:03.890919 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:03.896175 env[1730]: time="2023-10-02T18:55:03.896120510Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 18:55:03.898798 env[1730]: time="2023-10-02T18:55:03.898750005Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e0bfc5d64e2c86e8497f9da5fbf169dc17a08c923bc75187d41ff880cb71c12f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 18:55:03.901628 env[1730]: time="2023-10-02T18:55:03.901564853Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 18:55:03.902817 env[1730]: time="2023-10-02T18:55:03.902754219Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\" returns image reference \"sha256:e0bfc5d64e2c86e8497f9da5fbf169dc17a08c923bc75187d41ff880cb71c12f\"" Oct 2 18:55:03.906885 env[1730]: time="2023-10-02T18:55:03.906810476Z" level=info msg="CreateContainer within sandbox \"63d7320a21e9a3f867500b09dd4f0f02fb9d3093089ffc74ca82d60b5d5bec8c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 18:55:03.924489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount501374262.mount: Deactivated successfully. 
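
Note: the operator image above is pulled by a reference carrying both a tag and a digest (quay.io/cilium/operator-generic:v1.12.1@sha256:93d5...), and PullImage resolves it to the local image ID sha256:e0bfc.... When both are present, the digest is what pins the content; the tag is informational. A small string-handling sketch under that assumption (it also assumes the registry host carries no port, which holds for this reference):

    # Sketch: split an image reference of the form repo:tag@digest, as used
    # for the operator image above. String handling only; no registry access.
    def split_reference(ref: str) -> dict:
        name, _, digest = ref.partition("@")
        repo, _, tag = name.partition(":")   # assumes no port in the host part
        return {"repository": repo, "tag": tag or None, "digest": digest or None}

    ref = ("quay.io/cilium/operator-generic:v1.12.1"
           "@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1")
    print(split_reference(ref))
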
Oct 2 18:55:03.935351 env[1730]: time="2023-10-02T18:55:03.935265512Z" level=info msg="CreateContainer within sandbox \"63d7320a21e9a3f867500b09dd4f0f02fb9d3093089ffc74ca82d60b5d5bec8c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f\"" Oct 2 18:55:03.936759 env[1730]: time="2023-10-02T18:55:03.936658698Z" level=info msg="StartContainer for \"66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f\"" Oct 2 18:55:03.979713 systemd[1]: Started cri-containerd-66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f.scope. Oct 2 18:55:04.020000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.020000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.020000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.020000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.021000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.021000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.021000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.021000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.021000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.021000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.021000 audit: BPF prog-id=96 op=LOAD Oct 2 18:55:04.022000 audit[3280]: AVC avc: denied { bpf } for pid=3280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.022000 audit[3280]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=3117 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:55:04.022000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636663836623238393530323063386131326334373231366634613232 Oct 2 18:55:04.023000 audit[3280]: AVC avc: denied { perfmon } for pid=3280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.023000 audit[3280]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=3117 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:55:04.023000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636663836623238393530323063386131326334373231366634613232 Oct 2 18:55:04.023000 audit[3280]: AVC avc: denied { bpf } for pid=3280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.023000 audit[3280]: AVC avc: denied { bpf } for pid=3280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.023000 audit[3280]: AVC avc: denied { bpf } for pid=3280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.023000 audit[3280]: AVC avc: denied { perfmon } for pid=3280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.023000 audit[3280]: AVC avc: denied { perfmon } for pid=3280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.023000 audit[3280]: AVC avc: denied { perfmon } for pid=3280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.023000 audit[3280]: AVC avc: denied { perfmon } for pid=3280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.023000 audit[3280]: AVC avc: denied { perfmon } for pid=3280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.023000 audit[3280]: AVC avc: denied { bpf } for pid=3280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.023000 audit[3280]: AVC avc: denied { bpf } for pid=3280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.023000 audit: BPF prog-id=97 op=LOAD Oct 2 18:55:04.023000 audit[3280]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=3117 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:55:04.023000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636663836623238393530323063386131326334373231366634613232 Oct 2 18:55:04.024000 audit[3280]: AVC avc: denied { bpf } for pid=3280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.024000 audit[3280]: AVC avc: denied { bpf } for pid=3280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.024000 audit[3280]: AVC avc: denied { perfmon } for pid=3280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.024000 audit[3280]: AVC avc: denied { perfmon } for pid=3280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.024000 audit[3280]: AVC avc: denied { perfmon } for pid=3280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.024000 audit[3280]: AVC avc: denied { perfmon } for pid=3280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.024000 audit[3280]: AVC avc: denied { perfmon } for pid=3280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.024000 audit[3280]: AVC avc: denied { bpf } for pid=3280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.024000 audit[3280]: AVC avc: denied { bpf } for pid=3280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.024000 audit: BPF prog-id=98 op=LOAD Oct 2 18:55:04.024000 audit[3280]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=3117 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:55:04.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636663836623238393530323063386131326334373231366634613232 Oct 2 18:55:04.026000 audit: BPF prog-id=98 op=UNLOAD Oct 2 18:55:04.026000 audit: BPF prog-id=97 op=UNLOAD Oct 2 18:55:04.026000 audit[3280]: AVC avc: denied { bpf } for pid=3280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.026000 audit[3280]: AVC avc: denied { bpf } for pid=3280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.026000 audit[3280]: AVC avc: 
denied { bpf } for pid=3280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.026000 audit[3280]: AVC avc: denied { perfmon } for pid=3280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.026000 audit[3280]: AVC avc: denied { perfmon } for pid=3280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.026000 audit[3280]: AVC avc: denied { perfmon } for pid=3280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.026000 audit[3280]: AVC avc: denied { perfmon } for pid=3280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.026000 audit[3280]: AVC avc: denied { perfmon } for pid=3280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.026000 audit[3280]: AVC avc: denied { bpf } for pid=3280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.026000 audit[3280]: AVC avc: denied { bpf } for pid=3280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 18:55:04.026000 audit: BPF prog-id=99 op=LOAD Oct 2 18:55:04.026000 audit[3280]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=3117 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 18:55:04.026000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636663836623238393530323063386131326334373231366634613232 Oct 2 18:55:04.060423 env[1730]: time="2023-10-02T18:55:04.060362669Z" level=info msg="StartContainer for \"66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f\" returns successfully" Oct 2 18:55:04.132000 audit[3291]: AVC avc: denied { map_create } for pid=3291 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c470,c892 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c470,c892 tclass=bpf permissive=0 Oct 2 18:55:04.132000 audit[3291]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-13 a0=0 a1=400068f768 a2=48 a3=0 items=0 ppid=3117 pid=3291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c470,c892 key=(null) Oct 2 18:55:04.132000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 18:55:04.705225 kubelet[2198]: E1002 18:55:04.705163 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s 
restarting failed container=mount-cgroup pod=cilium-4gtlx_kube-system(41d6a429-b7aa-4bd1-ab15-098902c0e9dd)\"" pod="kube-system/cilium-4gtlx" podUID=41d6a429-b7aa-4bd1-ab15-098902c0e9dd Oct 2 18:55:04.891575 kubelet[2198]: E1002 18:55:04.891518 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:04.928579 kubelet[2198]: W1002 18:55:04.928533 2198 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41d6a429_b7aa_4bd1_ab15_098902c0e9dd.slice/cri-containerd-9ee4399657b12a77ccc8c0d69b1838f0a6979b382842bff0e7869bfb9f7d3bdb.scope WatchSource:0}: container "9ee4399657b12a77ccc8c0d69b1838f0a6979b382842bff0e7869bfb9f7d3bdb" in namespace "k8s.io": not found Oct 2 18:55:05.892415 kubelet[2198]: E1002 18:55:05.892359 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:06.893313 kubelet[2198]: E1002 18:55:06.893277 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:07.705023 kubelet[2198]: E1002 18:55:07.704963 2198 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:07.894039 kubelet[2198]: E1002 18:55:07.893985 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:07.946250 kubelet[2198]: E1002 18:55:07.946207 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:55:08.036485 kubelet[2198]: W1002 18:55:08.035964 2198 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41d6a429_b7aa_4bd1_ab15_098902c0e9dd.slice/cri-containerd-0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805.scope WatchSource:0}: task 0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805 not found: not found Oct 2 18:55:08.894988 kubelet[2198]: E1002 18:55:08.894921 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:09.895784 kubelet[2198]: E1002 18:55:09.895748 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:10.897218 kubelet[2198]: E1002 18:55:10.897135 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:11.897290 kubelet[2198]: E1002 18:55:11.897255 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:12.898409 kubelet[2198]: E1002 18:55:12.898342 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:12.947335 kubelet[2198]: E1002 18:55:12.947294 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:55:13.898646 kubelet[2198]: E1002 18:55:13.898582 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:14.899215 
kubelet[2198]: E1002 18:55:14.899149 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:15.899329 kubelet[2198]: E1002 18:55:15.899266 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:16.201393 env[1730]: time="2023-10-02T18:55:16.200907510Z" level=info msg="CreateContainer within sandbox \"f13333cc929063a5633bbe38494e110c80254980003a451e5070be80a562e1e8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 18:55:16.220710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1793793742.mount: Deactivated successfully. Oct 2 18:55:16.230579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1506250405.mount: Deactivated successfully. Oct 2 18:55:16.237117 env[1730]: time="2023-10-02T18:55:16.237057287Z" level=info msg="CreateContainer within sandbox \"f13333cc929063a5633bbe38494e110c80254980003a451e5070be80a562e1e8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c\"" Oct 2 18:55:16.238483 env[1730]: time="2023-10-02T18:55:16.238437502Z" level=info msg="StartContainer for \"7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c\"" Oct 2 18:55:16.286810 systemd[1]: Started cri-containerd-7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c.scope. Oct 2 18:55:16.321094 systemd[1]: cri-containerd-7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c.scope: Deactivated successfully. Oct 2 18:55:16.566814 env[1730]: time="2023-10-02T18:55:16.566729600Z" level=info msg="shim disconnected" id=7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c Oct 2 18:55:16.566814 env[1730]: time="2023-10-02T18:55:16.566804672Z" level=warning msg="cleaning up after shim disconnected" id=7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c namespace=k8s.io Oct 2 18:55:16.567184 env[1730]: time="2023-10-02T18:55:16.566828204Z" level=info msg="cleaning up dead shim" Oct 2 18:55:16.593303 env[1730]: time="2023-10-02T18:55:16.593218710Z" level=warning msg="cleanup warnings time=\"2023-10-02T18:55:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3335 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T18:55:16Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 18:55:16.593760 env[1730]: time="2023-10-02T18:55:16.593674278Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 18:55:16.596409 env[1730]: time="2023-10-02T18:55:16.596349148Z" level=error msg="Failed to pipe stdout of container \"7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c\"" error="reading from a closed fifo" Oct 2 18:55:16.596951 env[1730]: time="2023-10-02T18:55:16.596545648Z" level=error msg="Failed to pipe stderr of container \"7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c\"" error="reading from a closed fifo" Oct 2 18:55:16.599025 env[1730]: time="2023-10-02T18:55:16.598938807Z" level=error msg="StartContainer for \"7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable 
to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 18:55:16.599692 kubelet[2198]: E1002 18:55:16.599392 2198 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c" Oct 2 18:55:16.599692 kubelet[2198]: E1002 18:55:16.599552 2198 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 18:55:16.599692 kubelet[2198]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 18:55:16.599692 kubelet[2198]: rm /hostbin/cilium-mount Oct 2 18:55:16.600163 kubelet[2198]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4bzsg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-4gtlx_kube-system(41d6a429-b7aa-4bd1-ab15-098902c0e9dd): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 18:55:16.600349 kubelet[2198]: E1002 18:55:16.599646 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-4gtlx" podUID=41d6a429-b7aa-4bd1-ab15-098902c0e9dd Oct 2 18:55:16.734321 kubelet[2198]: I1002 18:55:16.734276 2198 scope.go:115] "RemoveContainer" containerID="0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805" Oct 2 18:55:16.735178 kubelet[2198]: I1002 18:55:16.735151 2198 scope.go:115] "RemoveContainer" 
containerID="0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805" Oct 2 18:55:16.737410 env[1730]: time="2023-10-02T18:55:16.737353875Z" level=info msg="RemoveContainer for \"0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805\"" Oct 2 18:55:16.739290 env[1730]: time="2023-10-02T18:55:16.739235762Z" level=info msg="RemoveContainer for \"0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805\"" Oct 2 18:55:16.739599 env[1730]: time="2023-10-02T18:55:16.739501346Z" level=error msg="RemoveContainer for \"0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805\" failed" error="failed to set removing state for container \"0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805\": container is already in removing state" Oct 2 18:55:16.740100 kubelet[2198]: E1002 18:55:16.740036 2198 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805\": container is already in removing state" containerID="0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805" Oct 2 18:55:16.740100 kubelet[2198]: E1002 18:55:16.740094 2198 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805": container is already in removing state; Skipping pod "cilium-4gtlx_kube-system(41d6a429-b7aa-4bd1-ab15-098902c0e9dd)" Oct 2 18:55:16.740575 kubelet[2198]: E1002 18:55:16.740536 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-4gtlx_kube-system(41d6a429-b7aa-4bd1-ab15-098902c0e9dd)\"" pod="kube-system/cilium-4gtlx" podUID=41d6a429-b7aa-4bd1-ab15-098902c0e9dd Oct 2 18:55:16.744838 env[1730]: time="2023-10-02T18:55:16.744760199Z" level=info msg="RemoveContainer for \"0d749ad12dce3a5c6f7b5b95fe2a8ce124174a5458a26b1153658890a56df805\" returns successfully" Oct 2 18:55:16.900265 kubelet[2198]: E1002 18:55:16.899425 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:17.216336 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c-rootfs.mount: Deactivated successfully. 
Oct 2 18:55:17.901748 kubelet[2198]: E1002 18:55:17.901685 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:17.948149 kubelet[2198]: E1002 18:55:17.948098 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:55:18.902600 kubelet[2198]: E1002 18:55:18.902566 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:19.673260 kubelet[2198]: W1002 18:55:19.673169 2198 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41d6a429_b7aa_4bd1_ab15_098902c0e9dd.slice/cri-containerd-7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c.scope WatchSource:0}: task 7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c not found: not found Oct 2 18:55:19.903614 kubelet[2198]: E1002 18:55:19.903576 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:20.904669 kubelet[2198]: E1002 18:55:20.904633 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:21.905915 kubelet[2198]: E1002 18:55:21.905878 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:22.907221 kubelet[2198]: E1002 18:55:22.907168 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:22.949275 kubelet[2198]: E1002 18:55:22.949226 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:55:23.908905 kubelet[2198]: E1002 18:55:23.908870 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:24.909654 kubelet[2198]: E1002 18:55:24.909590 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:25.909803 kubelet[2198]: E1002 18:55:25.909739 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:26.910284 kubelet[2198]: E1002 18:55:26.910226 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:27.705166 kubelet[2198]: E1002 18:55:27.705130 2198 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:27.760823 env[1730]: time="2023-10-02T18:55:27.760751155Z" level=info msg="StopPodSandbox for \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\"" Oct 2 18:55:27.761466 env[1730]: time="2023-10-02T18:55:27.760938559Z" level=info msg="TearDown network for sandbox \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\" successfully" Oct 2 18:55:27.761466 env[1730]: time="2023-10-02T18:55:27.761020915Z" level=info msg="StopPodSandbox for \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\" returns successfully" Oct 2 18:55:27.761880 env[1730]: time="2023-10-02T18:55:27.761834912Z" level=info 
msg="RemovePodSandbox for \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\"" Oct 2 18:55:27.762070 env[1730]: time="2023-10-02T18:55:27.762010088Z" level=info msg="Forcibly stopping sandbox \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\"" Oct 2 18:55:27.762310 env[1730]: time="2023-10-02T18:55:27.762274364Z" level=info msg="TearDown network for sandbox \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\" successfully" Oct 2 18:55:27.766935 env[1730]: time="2023-10-02T18:55:27.766842705Z" level=info msg="RemovePodSandbox \"3513a62f78ba2fb2d00a50e0997975011a41dc6f2befe15c4f743b7a704b6458\" returns successfully" Oct 2 18:55:27.767952 env[1730]: time="2023-10-02T18:55:27.767878425Z" level=info msg="StopPodSandbox for \"f46431fd1f5ea95e721f4aef095f03064b8062c3639750589cf3b246c48709ab\"" Oct 2 18:55:27.768092 env[1730]: time="2023-10-02T18:55:27.768033777Z" level=info msg="TearDown network for sandbox \"f46431fd1f5ea95e721f4aef095f03064b8062c3639750589cf3b246c48709ab\" successfully" Oct 2 18:55:27.768176 env[1730]: time="2023-10-02T18:55:27.768093249Z" level=info msg="StopPodSandbox for \"f46431fd1f5ea95e721f4aef095f03064b8062c3639750589cf3b246c48709ab\" returns successfully" Oct 2 18:55:27.768809 env[1730]: time="2023-10-02T18:55:27.768769030Z" level=info msg="RemovePodSandbox for \"f46431fd1f5ea95e721f4aef095f03064b8062c3639750589cf3b246c48709ab\"" Oct 2 18:55:27.768993 env[1730]: time="2023-10-02T18:55:27.768938434Z" level=info msg="Forcibly stopping sandbox \"f46431fd1f5ea95e721f4aef095f03064b8062c3639750589cf3b246c48709ab\"" Oct 2 18:55:27.769235 env[1730]: time="2023-10-02T18:55:27.769165030Z" level=info msg="TearDown network for sandbox \"f46431fd1f5ea95e721f4aef095f03064b8062c3639750589cf3b246c48709ab\" successfully" Oct 2 18:55:27.773046 env[1730]: time="2023-10-02T18:55:27.772996763Z" level=info msg="RemovePodSandbox \"f46431fd1f5ea95e721f4aef095f03064b8062c3639750589cf3b246c48709ab\" returns successfully" Oct 2 18:55:27.910847 kubelet[2198]: E1002 18:55:27.910786 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:27.950530 kubelet[2198]: E1002 18:55:27.950480 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:55:28.910989 kubelet[2198]: E1002 18:55:28.910933 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:29.911435 kubelet[2198]: E1002 18:55:29.911379 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:30.912265 kubelet[2198]: E1002 18:55:30.912212 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:31.197054 kubelet[2198]: E1002 18:55:31.196920 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-4gtlx_kube-system(41d6a429-b7aa-4bd1-ab15-098902c0e9dd)\"" pod="kube-system/cilium-4gtlx" podUID=41d6a429-b7aa-4bd1-ab15-098902c0e9dd Oct 2 18:55:31.912971 kubelet[2198]: E1002 18:55:31.912911 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
18:55:32.913958 kubelet[2198]: E1002 18:55:32.913921 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:32.952064 kubelet[2198]: E1002 18:55:32.951988 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:55:33.915504 kubelet[2198]: E1002 18:55:33.915472 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:34.916742 kubelet[2198]: E1002 18:55:34.916678 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:35.917161 kubelet[2198]: E1002 18:55:35.917127 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:36.918573 kubelet[2198]: E1002 18:55:36.918534 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:37.920074 kubelet[2198]: E1002 18:55:37.919832 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:37.953049 kubelet[2198]: E1002 18:55:37.953018 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:55:38.919985 kubelet[2198]: E1002 18:55:38.919950 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:39.921151 kubelet[2198]: E1002 18:55:39.921095 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:40.921341 kubelet[2198]: E1002 18:55:40.921279 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:41.921479 kubelet[2198]: E1002 18:55:41.921425 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:42.200920 env[1730]: time="2023-10-02T18:55:42.200510592Z" level=info msg="CreateContainer within sandbox \"f13333cc929063a5633bbe38494e110c80254980003a451e5070be80a562e1e8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 18:55:42.217883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount584058220.mount: Deactivated successfully. Oct 2 18:55:42.229311 env[1730]: time="2023-10-02T18:55:42.229228582Z" level=info msg="CreateContainer within sandbox \"f13333cc929063a5633bbe38494e110c80254980003a451e5070be80a562e1e8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"ef4c8a19a299a1f0844df3d4bb7e2808e867d155d4e59f0e5663022f0c9b979e\"" Oct 2 18:55:42.230274 env[1730]: time="2023-10-02T18:55:42.230126099Z" level=info msg="StartContainer for \"ef4c8a19a299a1f0844df3d4bb7e2808e867d155d4e59f0e5663022f0c9b979e\"" Oct 2 18:55:42.296969 systemd[1]: Started cri-containerd-ef4c8a19a299a1f0844df3d4bb7e2808e867d155d4e59f0e5663022f0c9b979e.scope. Oct 2 18:55:42.330454 systemd[1]: cri-containerd-ef4c8a19a299a1f0844df3d4bb7e2808e867d155d4e59f0e5663022f0c9b979e.scope: Deactivated successfully. 
Oct 2 18:55:42.351930 env[1730]: time="2023-10-02T18:55:42.351833076Z" level=info msg="shim disconnected" id=ef4c8a19a299a1f0844df3d4bb7e2808e867d155d4e59f0e5663022f0c9b979e Oct 2 18:55:42.352260 env[1730]: time="2023-10-02T18:55:42.351934812Z" level=warning msg="cleaning up after shim disconnected" id=ef4c8a19a299a1f0844df3d4bb7e2808e867d155d4e59f0e5663022f0c9b979e namespace=k8s.io Oct 2 18:55:42.352260 env[1730]: time="2023-10-02T18:55:42.351957936Z" level=info msg="cleaning up dead shim" Oct 2 18:55:42.377212 env[1730]: time="2023-10-02T18:55:42.377110110Z" level=warning msg="cleanup warnings time=\"2023-10-02T18:55:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3377 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T18:55:42Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ef4c8a19a299a1f0844df3d4bb7e2808e867d155d4e59f0e5663022f0c9b979e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 18:55:42.377696 env[1730]: time="2023-10-02T18:55:42.377607510Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 18:55:42.379185 env[1730]: time="2023-10-02T18:55:42.379071932Z" level=error msg="Failed to pipe stdout of container \"ef4c8a19a299a1f0844df3d4bb7e2808e867d155d4e59f0e5663022f0c9b979e\"" error="reading from a closed fifo" Oct 2 18:55:42.379336 env[1730]: time="2023-10-02T18:55:42.379153916Z" level=error msg="Failed to pipe stderr of container \"ef4c8a19a299a1f0844df3d4bb7e2808e867d155d4e59f0e5663022f0c9b979e\"" error="reading from a closed fifo" Oct 2 18:55:42.382053 env[1730]: time="2023-10-02T18:55:42.381982932Z" level=error msg="StartContainer for \"ef4c8a19a299a1f0844df3d4bb7e2808e867d155d4e59f0e5663022f0c9b979e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 18:55:42.383148 kubelet[2198]: E1002 18:55:42.382549 2198 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ef4c8a19a299a1f0844df3d4bb7e2808e867d155d4e59f0e5663022f0c9b979e" Oct 2 18:55:42.383148 kubelet[2198]: E1002 18:55:42.382680 2198 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 18:55:42.383148 kubelet[2198]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 18:55:42.383148 kubelet[2198]: rm /hostbin/cilium-mount Oct 2 18:55:42.383572 kubelet[2198]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4bzsg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-4gtlx_kube-system(41d6a429-b7aa-4bd1-ab15-098902c0e9dd): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 18:55:42.383691 kubelet[2198]: E1002 18:55:42.382741 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-4gtlx" podUID=41d6a429-b7aa-4bd1-ab15-098902c0e9dd Oct 2 18:55:42.789240 kubelet[2198]: I1002 18:55:42.789138 2198 scope.go:115] "RemoveContainer" containerID="7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c" Oct 2 18:55:42.790135 kubelet[2198]: I1002 18:55:42.789882 2198 scope.go:115] "RemoveContainer" containerID="7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c" Oct 2 18:55:42.791988 env[1730]: time="2023-10-02T18:55:42.791899363Z" level=info msg="RemoveContainer for \"7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c\"" Oct 2 18:55:42.793476 env[1730]: time="2023-10-02T18:55:42.793421600Z" level=info msg="RemoveContainer for \"7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c\"" Oct 2 18:55:42.793649 env[1730]: time="2023-10-02T18:55:42.793559241Z" level=error msg="RemoveContainer for \"7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c\" failed" error="failed to set removing state for container \"7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c\": container is already in removing state" Oct 2 18:55:42.793906 kubelet[2198]: E1002 18:55:42.793877 2198 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c\": container is already in removing state" 
containerID="7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c" Oct 2 18:55:42.794079 kubelet[2198]: E1002 18:55:42.794057 2198 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c": container is already in removing state; Skipping pod "cilium-4gtlx_kube-system(41d6a429-b7aa-4bd1-ab15-098902c0e9dd)" Oct 2 18:55:42.794689 kubelet[2198]: E1002 18:55:42.794658 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-4gtlx_kube-system(41d6a429-b7aa-4bd1-ab15-098902c0e9dd)\"" pod="kube-system/cilium-4gtlx" podUID=41d6a429-b7aa-4bd1-ab15-098902c0e9dd Oct 2 18:55:42.798966 env[1730]: time="2023-10-02T18:55:42.798898527Z" level=info msg="RemoveContainer for \"7a1b9e736a11ceab957bd7353a7621b6301e3b70024a7da0f591bed05797530c\" returns successfully" Oct 2 18:55:42.921703 kubelet[2198]: E1002 18:55:42.921634 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:42.954873 kubelet[2198]: E1002 18:55:42.954816 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:55:43.214422 systemd[1]: run-containerd-runc-k8s.io-ef4c8a19a299a1f0844df3d4bb7e2808e867d155d4e59f0e5663022f0c9b979e-runc.2fuZ2r.mount: Deactivated successfully. Oct 2 18:55:43.215211 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef4c8a19a299a1f0844df3d4bb7e2808e867d155d4e59f0e5663022f0c9b979e-rootfs.mount: Deactivated successfully. 
Oct 2 18:55:43.922376 kubelet[2198]: E1002 18:55:43.922335 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:44.923398 kubelet[2198]: E1002 18:55:44.923332 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:45.457472 kubelet[2198]: W1002 18:55:45.457427 2198 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41d6a429_b7aa_4bd1_ab15_098902c0e9dd.slice/cri-containerd-ef4c8a19a299a1f0844df3d4bb7e2808e867d155d4e59f0e5663022f0c9b979e.scope WatchSource:0}: task ef4c8a19a299a1f0844df3d4bb7e2808e867d155d4e59f0e5663022f0c9b979e not found: not found Oct 2 18:55:45.923803 kubelet[2198]: E1002 18:55:45.923741 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:46.924175 kubelet[2198]: E1002 18:55:46.924130 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:47.704795 kubelet[2198]: E1002 18:55:47.704757 2198 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:47.925490 kubelet[2198]: E1002 18:55:47.925454 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:47.955933 kubelet[2198]: E1002 18:55:47.955819 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:55:48.926874 kubelet[2198]: E1002 18:55:48.926841 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:49.928501 kubelet[2198]: E1002 18:55:49.928437 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:50.928729 kubelet[2198]: E1002 18:55:50.928693 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:51.929533 kubelet[2198]: E1002 18:55:51.929471 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:52.930082 kubelet[2198]: E1002 18:55:52.930046 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:52.957024 kubelet[2198]: E1002 18:55:52.956983 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:55:53.931887 kubelet[2198]: E1002 18:55:53.931820 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:54.932804 kubelet[2198]: E1002 18:55:54.932752 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:55.933659 kubelet[2198]: E1002 18:55:55.933600 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:56.934564 kubelet[2198]: E1002 18:55:56.934530 2198 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:57.935280 kubelet[2198]: E1002 18:55:57.935229 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:57.958165 kubelet[2198]: E1002 18:55:57.958122 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:55:58.197101 kubelet[2198]: E1002 18:55:58.196308 2198 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-4gtlx_kube-system(41d6a429-b7aa-4bd1-ab15-098902c0e9dd)\"" pod="kube-system/cilium-4gtlx" podUID=41d6a429-b7aa-4bd1-ab15-098902c0e9dd Oct 2 18:55:58.935621 kubelet[2198]: E1002 18:55:58.935572 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:55:59.937100 kubelet[2198]: E1002 18:55:59.937043 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:56:00.937448 kubelet[2198]: E1002 18:56:00.937412 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:56:01.939130 kubelet[2198]: E1002 18:56:01.939094 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:56:02.646212 env[1730]: time="2023-10-02T18:56:02.646101921Z" level=info msg="StopPodSandbox for \"f13333cc929063a5633bbe38494e110c80254980003a451e5070be80a562e1e8\"" Oct 2 18:56:02.650015 env[1730]: time="2023-10-02T18:56:02.646252762Z" level=info msg="Container to stop \"ef4c8a19a299a1f0844df3d4bb7e2808e867d155d4e59f0e5663022f0c9b979e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 18:56:02.648780 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f13333cc929063a5633bbe38494e110c80254980003a451e5070be80a562e1e8-shm.mount: Deactivated successfully. Oct 2 18:56:02.668142 systemd[1]: cri-containerd-f13333cc929063a5633bbe38494e110c80254980003a451e5070be80a562e1e8.scope: Deactivated successfully. Oct 2 18:56:02.667000 audit: BPF prog-id=92 op=UNLOAD Oct 2 18:56:02.671562 kernel: kauditd_printk_skb: 232 callbacks suppressed Oct 2 18:56:02.671646 kernel: audit: type=1334 audit(1696272962.667:792): prog-id=92 op=UNLOAD Oct 2 18:56:02.676000 audit: BPF prog-id=95 op=UNLOAD Oct 2 18:56:02.681265 kernel: audit: type=1334 audit(1696272962.676:793): prog-id=95 op=UNLOAD Oct 2 18:56:02.717708 env[1730]: time="2023-10-02T18:56:02.717629944Z" level=info msg="StopContainer for \"66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f\" with timeout 30 (s)" Oct 2 18:56:02.718240 env[1730]: time="2023-10-02T18:56:02.718166225Z" level=info msg="Stop container \"66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f\" with signal terminated" Oct 2 18:56:02.727385 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f13333cc929063a5633bbe38494e110c80254980003a451e5070be80a562e1e8-rootfs.mount: Deactivated successfully. 
Oct 2 18:56:02.745537 env[1730]: time="2023-10-02T18:56:02.745458986Z" level=info msg="shim disconnected" id=f13333cc929063a5633bbe38494e110c80254980003a451e5070be80a562e1e8 Oct 2 18:56:02.745537 env[1730]: time="2023-10-02T18:56:02.745532150Z" level=warning msg="cleaning up after shim disconnected" id=f13333cc929063a5633bbe38494e110c80254980003a451e5070be80a562e1e8 namespace=k8s.io Oct 2 18:56:02.745858 env[1730]: time="2023-10-02T18:56:02.745554902Z" level=info msg="cleaning up dead shim" Oct 2 18:56:02.753000 audit: BPF prog-id=96 op=UNLOAD Oct 2 18:56:02.754124 systemd[1]: cri-containerd-66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f.scope: Deactivated successfully. Oct 2 18:56:02.758232 kernel: audit: type=1334 audit(1696272962.753:794): prog-id=96 op=UNLOAD Oct 2 18:56:02.758000 audit: BPF prog-id=99 op=UNLOAD Oct 2 18:56:02.763282 kernel: audit: type=1334 audit(1696272962.758:795): prog-id=99 op=UNLOAD Oct 2 18:56:02.788881 env[1730]: time="2023-10-02T18:56:02.788803977Z" level=warning msg="cleanup warnings time=\"2023-10-02T18:56:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3416 runtime=io.containerd.runc.v2\n" Oct 2 18:56:02.789509 env[1730]: time="2023-10-02T18:56:02.789433450Z" level=info msg="TearDown network for sandbox \"f13333cc929063a5633bbe38494e110c80254980003a451e5070be80a562e1e8\" successfully" Oct 2 18:56:02.789509 env[1730]: time="2023-10-02T18:56:02.789495119Z" level=info msg="StopPodSandbox for \"f13333cc929063a5633bbe38494e110c80254980003a451e5070be80a562e1e8\" returns successfully" Oct 2 18:56:02.811502 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f-rootfs.mount: Deactivated successfully. Oct 2 18:56:02.825401 env[1730]: time="2023-10-02T18:56:02.825337850Z" level=info msg="shim disconnected" id=66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f Oct 2 18:56:02.825720 env[1730]: time="2023-10-02T18:56:02.825686343Z" level=warning msg="cleaning up after shim disconnected" id=66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f namespace=k8s.io Oct 2 18:56:02.825845 env[1730]: time="2023-10-02T18:56:02.825817179Z" level=info msg="cleaning up dead shim" Oct 2 18:56:02.832369 kubelet[2198]: I1002 18:56:02.832318 2198 scope.go:115] "RemoveContainer" containerID="ef4c8a19a299a1f0844df3d4bb7e2808e867d155d4e59f0e5663022f0c9b979e" Oct 2 18:56:02.834717 env[1730]: time="2023-10-02T18:56:02.834645789Z" level=info msg="RemoveContainer for \"ef4c8a19a299a1f0844df3d4bb7e2808e867d155d4e59f0e5663022f0c9b979e\"" Oct 2 18:56:02.841035 env[1730]: time="2023-10-02T18:56:02.840955775Z" level=info msg="RemoveContainer for \"ef4c8a19a299a1f0844df3d4bb7e2808e867d155d4e59f0e5663022f0c9b979e\" returns successfully" Oct 2 18:56:02.856740 env[1730]: time="2023-10-02T18:56:02.856682168Z" level=warning msg="cleanup warnings time=\"2023-10-02T18:56:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3442 runtime=io.containerd.runc.v2\n" Oct 2 18:56:02.859676 env[1730]: time="2023-10-02T18:56:02.859623782Z" level=info msg="StopContainer for \"66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f\" returns successfully" Oct 2 18:56:02.860705 env[1730]: time="2023-10-02T18:56:02.860639800Z" level=info msg="StopPodSandbox for \"63d7320a21e9a3f867500b09dd4f0f02fb9d3093089ffc74ca82d60b5d5bec8c\"" Oct 2 18:56:02.860950 env[1730]: time="2023-10-02T18:56:02.860730604Z" level=info msg="Container to stop 
\"66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 18:56:02.863358 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-63d7320a21e9a3f867500b09dd4f0f02fb9d3093089ffc74ca82d60b5d5bec8c-shm.mount: Deactivated successfully. Oct 2 18:56:02.881000 audit: BPF prog-id=88 op=UNLOAD Oct 2 18:56:02.882694 systemd[1]: cri-containerd-63d7320a21e9a3f867500b09dd4f0f02fb9d3093089ffc74ca82d60b5d5bec8c.scope: Deactivated successfully. Oct 2 18:56:02.887280 kernel: audit: type=1334 audit(1696272962.881:796): prog-id=88 op=UNLOAD Oct 2 18:56:02.893288 kernel: audit: type=1334 audit(1696272962.888:797): prog-id=91 op=UNLOAD Oct 2 18:56:02.888000 audit: BPF prog-id=91 op=UNLOAD Oct 2 18:56:02.893521 kubelet[2198]: I1002 18:56:02.889632 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-host-proc-sys-kernel\") pod \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " Oct 2 18:56:02.893521 kubelet[2198]: I1002 18:56:02.889699 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-hubble-tls\") pod \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " Oct 2 18:56:02.893521 kubelet[2198]: I1002 18:56:02.889741 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-host-proc-sys-net\") pod \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " Oct 2 18:56:02.893521 kubelet[2198]: I1002 18:56:02.889781 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-lib-modules\") pod \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " Oct 2 18:56:02.893521 kubelet[2198]: I1002 18:56:02.889819 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-etc-cni-netd\") pod \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " Oct 2 18:56:02.893521 kubelet[2198]: I1002 18:56:02.889863 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bzsg\" (UniqueName: \"kubernetes.io/projected/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-kube-api-access-4bzsg\") pod \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " Oct 2 18:56:02.893915 kubelet[2198]: I1002 18:56:02.889899 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-bpf-maps\") pod \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " Oct 2 18:56:02.893915 kubelet[2198]: I1002 18:56:02.889943 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-cilium-config-path\") pod \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " Oct 2 18:56:02.893915 
kubelet[2198]: I1002 18:56:02.889986 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-cilium-ipsec-secrets\") pod \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " Oct 2 18:56:02.893915 kubelet[2198]: I1002 18:56:02.890023 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-xtables-lock\") pod \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " Oct 2 18:56:02.893915 kubelet[2198]: I1002 18:56:02.890060 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-cilium-run\") pod \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " Oct 2 18:56:02.893915 kubelet[2198]: I1002 18:56:02.890101 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-hostproc\") pod \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " Oct 2 18:56:02.895433 kubelet[2198]: I1002 18:56:02.890142 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-cilium-cgroup\") pod \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " Oct 2 18:56:02.895433 kubelet[2198]: I1002 18:56:02.890292 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-clustermesh-secrets\") pod \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " Oct 2 18:56:02.895433 kubelet[2198]: I1002 18:56:02.890383 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-cni-path\") pod \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\" (UID: \"41d6a429-b7aa-4bd1-ab15-098902c0e9dd\") " Oct 2 18:56:02.895433 kubelet[2198]: I1002 18:56:02.890465 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-cni-path" (OuterVolumeSpecName: "cni-path") pod "41d6a429-b7aa-4bd1-ab15-098902c0e9dd" (UID: "41d6a429-b7aa-4bd1-ab15-098902c0e9dd"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:56:02.895433 kubelet[2198]: I1002 18:56:02.890513 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "41d6a429-b7aa-4bd1-ab15-098902c0e9dd" (UID: "41d6a429-b7aa-4bd1-ab15-098902c0e9dd"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:56:02.895433 kubelet[2198]: W1002 18:56:02.891229 2198 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/41d6a429-b7aa-4bd1-ab15-098902c0e9dd/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 18:56:02.895815 kubelet[2198]: I1002 18:56:02.892252 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "41d6a429-b7aa-4bd1-ab15-098902c0e9dd" (UID: "41d6a429-b7aa-4bd1-ab15-098902c0e9dd"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:56:02.895815 kubelet[2198]: I1002 18:56:02.892332 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "41d6a429-b7aa-4bd1-ab15-098902c0e9dd" (UID: "41d6a429-b7aa-4bd1-ab15-098902c0e9dd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:56:02.895815 kubelet[2198]: I1002 18:56:02.892450 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "41d6a429-b7aa-4bd1-ab15-098902c0e9dd" (UID: "41d6a429-b7aa-4bd1-ab15-098902c0e9dd"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:56:02.895815 kubelet[2198]: I1002 18:56:02.893360 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "41d6a429-b7aa-4bd1-ab15-098902c0e9dd" (UID: "41d6a429-b7aa-4bd1-ab15-098902c0e9dd"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:56:02.895815 kubelet[2198]: I1002 18:56:02.893483 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-hostproc" (OuterVolumeSpecName: "hostproc") pod "41d6a429-b7aa-4bd1-ab15-098902c0e9dd" (UID: "41d6a429-b7aa-4bd1-ab15-098902c0e9dd"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:56:02.896108 kubelet[2198]: I1002 18:56:02.893550 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "41d6a429-b7aa-4bd1-ab15-098902c0e9dd" (UID: "41d6a429-b7aa-4bd1-ab15-098902c0e9dd"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:56:02.896108 kubelet[2198]: I1002 18:56:02.893613 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "41d6a429-b7aa-4bd1-ab15-098902c0e9dd" (UID: "41d6a429-b7aa-4bd1-ab15-098902c0e9dd"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:56:02.896108 kubelet[2198]: I1002 18:56:02.893665 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "41d6a429-b7aa-4bd1-ab15-098902c0e9dd" (UID: "41d6a429-b7aa-4bd1-ab15-098902c0e9dd"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 18:56:02.902693 kubelet[2198]: I1002 18:56:02.899996 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "41d6a429-b7aa-4bd1-ab15-098902c0e9dd" (UID: "41d6a429-b7aa-4bd1-ab15-098902c0e9dd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 18:56:02.911785 systemd[1]: var-lib-kubelet-pods-41d6a429\x2db7aa\x2d4bd1\x2dab15\x2d098902c0e9dd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 18:56:02.919134 kubelet[2198]: I1002 18:56:02.919068 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "41d6a429-b7aa-4bd1-ab15-098902c0e9dd" (UID: "41d6a429-b7aa-4bd1-ab15-098902c0e9dd"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 18:56:02.928031 kubelet[2198]: I1002 18:56:02.927916 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-kube-api-access-4bzsg" (OuterVolumeSpecName: "kube-api-access-4bzsg") pod "41d6a429-b7aa-4bd1-ab15-098902c0e9dd" (UID: "41d6a429-b7aa-4bd1-ab15-098902c0e9dd"). InnerVolumeSpecName "kube-api-access-4bzsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 18:56:02.930373 kubelet[2198]: I1002 18:56:02.930262 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "41d6a429-b7aa-4bd1-ab15-098902c0e9dd" (UID: "41d6a429-b7aa-4bd1-ab15-098902c0e9dd"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 18:56:02.931946 kubelet[2198]: I1002 18:56:02.931746 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "41d6a429-b7aa-4bd1-ab15-098902c0e9dd" (UID: "41d6a429-b7aa-4bd1-ab15-098902c0e9dd"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 18:56:02.941179 kubelet[2198]: E1002 18:56:02.941121 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:56:02.959911 kubelet[2198]: E1002 18:56:02.959809 2198 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 18:56:02.977684 env[1730]: time="2023-10-02T18:56:02.977619442Z" level=info msg="shim disconnected" id=63d7320a21e9a3f867500b09dd4f0f02fb9d3093089ffc74ca82d60b5d5bec8c Oct 2 18:56:02.978170 env[1730]: time="2023-10-02T18:56:02.978131843Z" level=warning msg="cleaning up after shim disconnected" id=63d7320a21e9a3f867500b09dd4f0f02fb9d3093089ffc74ca82d60b5d5bec8c namespace=k8s.io Oct 2 18:56:02.978351 env[1730]: time="2023-10-02T18:56:02.978321551Z" level=info msg="cleaning up dead shim" Oct 2 18:56:02.992020 kubelet[2198]: I1002 18:56:02.991607 2198 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-bpf-maps\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:56:02.992020 kubelet[2198]: I1002 18:56:02.991661 2198 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-cilium-config-path\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:56:02.992020 kubelet[2198]: I1002 18:56:02.991686 2198 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-etc-cni-netd\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:56:02.992020 kubelet[2198]: I1002 18:56:02.991710 2198 reconciler.go:399] "Volume detached for volume \"kube-api-access-4bzsg\" (UniqueName: \"kubernetes.io/projected/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-kube-api-access-4bzsg\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:56:02.992020 kubelet[2198]: I1002 18:56:02.991734 2198 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-hostproc\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:56:02.992020 kubelet[2198]: I1002 18:56:02.991757 2198 reconciler.go:399] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-cilium-ipsec-secrets\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:56:02.992020 kubelet[2198]: I1002 18:56:02.991779 2198 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-xtables-lock\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:56:02.992020 kubelet[2198]: I1002 18:56:02.991804 2198 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-cilium-run\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:56:02.992726 kubelet[2198]: I1002 18:56:02.991828 2198 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-cni-path\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:56:02.992726 kubelet[2198]: I1002 18:56:02.991851 2198 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-cilium-cgroup\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:56:02.992726 kubelet[2198]: I1002 18:56:02.991873 2198 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-clustermesh-secrets\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:56:02.992726 kubelet[2198]: I1002 18:56:02.991898 2198 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-host-proc-sys-net\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:56:02.992726 kubelet[2198]: I1002 18:56:02.991922 2198 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-lib-modules\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:56:02.992726 kubelet[2198]: I1002 18:56:02.991946 2198 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-host-proc-sys-kernel\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:56:02.992726 kubelet[2198]: I1002 18:56:02.991969 2198 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41d6a429-b7aa-4bd1-ab15-098902c0e9dd-hubble-tls\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:56:03.003666 env[1730]: time="2023-10-02T18:56:03.003611712Z" level=warning msg="cleanup warnings time=\"2023-10-02T18:56:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3478 runtime=io.containerd.runc.v2\n" Oct 2 18:56:03.004413 env[1730]: time="2023-10-02T18:56:03.004369742Z" level=info msg="TearDown network for sandbox \"63d7320a21e9a3f867500b09dd4f0f02fb9d3093089ffc74ca82d60b5d5bec8c\" successfully" Oct 2 18:56:03.004591 env[1730]: time="2023-10-02T18:56:03.004553570Z" level=info msg="StopPodSandbox for \"63d7320a21e9a3f867500b09dd4f0f02fb9d3093089ffc74ca82d60b5d5bec8c\" returns successfully" Oct 2 18:56:03.092574 kubelet[2198]: I1002 18:56:03.092541 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf9a0072-e330-4e5b-870e-92c6c3e72ea8-cilium-config-path\") pod \"cf9a0072-e330-4e5b-870e-92c6c3e72ea8\" (UID: \"cf9a0072-e330-4e5b-870e-92c6c3e72ea8\") " Oct 2 18:56:03.092845 kubelet[2198]: I1002 18:56:03.092810 2198 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2qd5\" (UniqueName: \"kubernetes.io/projected/cf9a0072-e330-4e5b-870e-92c6c3e72ea8-kube-api-access-g2qd5\") pod \"cf9a0072-e330-4e5b-870e-92c6c3e72ea8\" (UID: \"cf9a0072-e330-4e5b-870e-92c6c3e72ea8\") " Oct 2 18:56:03.093518 kubelet[2198]: W1002 18:56:03.093458 2198 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/cf9a0072-e330-4e5b-870e-92c6c3e72ea8/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 18:56:03.098479 kubelet[2198]: I1002 18:56:03.098397 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf9a0072-e330-4e5b-870e-92c6c3e72ea8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cf9a0072-e330-4e5b-870e-92c6c3e72ea8" (UID: "cf9a0072-e330-4e5b-870e-92c6c3e72ea8"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 18:56:03.103326 kubelet[2198]: I1002 18:56:03.103160 2198 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf9a0072-e330-4e5b-870e-92c6c3e72ea8-kube-api-access-g2qd5" (OuterVolumeSpecName: "kube-api-access-g2qd5") pod "cf9a0072-e330-4e5b-870e-92c6c3e72ea8" (UID: "cf9a0072-e330-4e5b-870e-92c6c3e72ea8"). InnerVolumeSpecName "kube-api-access-g2qd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 18:56:03.140627 systemd[1]: Removed slice kubepods-burstable-pod41d6a429_b7aa_4bd1_ab15_098902c0e9dd.slice. Oct 2 18:56:03.193816 kubelet[2198]: I1002 18:56:03.193668 2198 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf9a0072-e330-4e5b-870e-92c6c3e72ea8-cilium-config-path\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:56:03.193816 kubelet[2198]: I1002 18:56:03.193725 2198 reconciler.go:399] "Volume detached for volume \"kube-api-access-g2qd5\" (UniqueName: \"kubernetes.io/projected/cf9a0072-e330-4e5b-870e-92c6c3e72ea8-kube-api-access-g2qd5\") on node \"172.31.28.169\" DevicePath \"\"" Oct 2 18:56:03.648581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63d7320a21e9a3f867500b09dd4f0f02fb9d3093089ffc74ca82d60b5d5bec8c-rootfs.mount: Deactivated successfully. Oct 2 18:56:03.648749 systemd[1]: var-lib-kubelet-pods-cf9a0072\x2de330\x2d4e5b\x2d870e\x2d92c6c3e72ea8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg2qd5.mount: Deactivated successfully. Oct 2 18:56:03.648884 systemd[1]: var-lib-kubelet-pods-41d6a429\x2db7aa\x2d4bd1\x2dab15\x2d098902c0e9dd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4bzsg.mount: Deactivated successfully. Oct 2 18:56:03.649013 systemd[1]: var-lib-kubelet-pods-41d6a429\x2db7aa\x2d4bd1\x2dab15\x2d098902c0e9dd-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 2 18:56:03.649150 systemd[1]: var-lib-kubelet-pods-41d6a429\x2db7aa\x2d4bd1\x2dab15\x2d098902c0e9dd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 18:56:03.837930 kubelet[2198]: I1002 18:56:03.837878 2198 scope.go:115] "RemoveContainer" containerID="66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f" Oct 2 18:56:03.845670 systemd[1]: Removed slice kubepods-besteffort-podcf9a0072_e330_4e5b_870e_92c6c3e72ea8.slice. 
Oct 2 18:56:03.852436 env[1730]: time="2023-10-02T18:56:03.852382719Z" level=info msg="RemoveContainer for \"66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f\"" Oct 2 18:56:03.861422 env[1730]: time="2023-10-02T18:56:03.861365435Z" level=info msg="RemoveContainer for \"66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f\" returns successfully" Oct 2 18:56:03.861963 kubelet[2198]: I1002 18:56:03.861839 2198 scope.go:115] "RemoveContainer" containerID="66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f" Oct 2 18:56:03.862331 env[1730]: time="2023-10-02T18:56:03.862167660Z" level=error msg="ContainerStatus for \"66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f\": not found" Oct 2 18:56:03.862687 kubelet[2198]: E1002 18:56:03.862601 2198 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f\": not found" containerID="66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f" Oct 2 18:56:03.862687 kubelet[2198]: I1002 18:56:03.862656 2198 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f} err="failed to get container status \"66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f\": rpc error: code = NotFound desc = an error occurred when try to find container \"66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f\": not found" Oct 2 18:56:03.942795 kubelet[2198]: E1002 18:56:03.942643 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 18:56:04.198599 env[1730]: time="2023-10-02T18:56:04.198449871Z" level=info msg="StopPodSandbox for \"f13333cc929063a5633bbe38494e110c80254980003a451e5070be80a562e1e8\"" Oct 2 18:56:04.198932 env[1730]: time="2023-10-02T18:56:04.198798507Z" level=info msg="StopContainer for \"66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f\" with timeout 1 (s)" Oct 2 18:56:04.199064 env[1730]: time="2023-10-02T18:56:04.198883192Z" level=error msg="StopContainer for \"66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f\": not found" Oct 2 18:56:04.199635 env[1730]: time="2023-10-02T18:56:04.199557221Z" level=info msg="TearDown network for sandbox \"f13333cc929063a5633bbe38494e110c80254980003a451e5070be80a562e1e8\" successfully" Oct 2 18:56:04.199849 kubelet[2198]: E1002 18:56:04.199438 2198 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f\": not found" containerID="66f86b2895020c8a12c47216f4a2242ea5bf5d131c8d0dba2b5701163fe7f68f" Oct 2 18:56:04.200531 env[1730]: time="2023-10-02T18:56:04.200448571Z" level=info msg="StopPodSandbox for \"f13333cc929063a5633bbe38494e110c80254980003a451e5070be80a562e1e8\" returns successfully" Oct 2 18:56:04.200662 env[1730]: time="2023-10-02T18:56:04.200243815Z" level=info msg="StopPodSandbox for 
\"63d7320a21e9a3f867500b09dd4f0f02fb9d3093089ffc74ca82d60b5d5bec8c\"" Oct 2 18:56:04.200825 env[1730]: time="2023-10-02T18:56:04.200725160Z" level=info msg="TearDown network for sandbox \"63d7320a21e9a3f867500b09dd4f0f02fb9d3093089ffc74ca82d60b5d5bec8c\" successfully" Oct 2 18:56:04.200918 env[1730]: time="2023-10-02T18:56:04.200838620Z" level=info msg="StopPodSandbox for \"63d7320a21e9a3f867500b09dd4f0f02fb9d3093089ffc74ca82d60b5d5bec8c\" returns successfully" Oct 2 18:56:04.201089 kubelet[2198]: I1002 18:56:04.201060 2198 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=41d6a429-b7aa-4bd1-ab15-098902c0e9dd path="/var/lib/kubelet/pods/41d6a429-b7aa-4bd1-ab15-098902c0e9dd/volumes" Oct 2 18:56:04.202292 kubelet[2198]: I1002 18:56:04.202259 2198 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=cf9a0072-e330-4e5b-870e-92c6c3e72ea8 path="/var/lib/kubelet/pods/cf9a0072-e330-4e5b-870e-92c6c3e72ea8/volumes" Oct 2 18:56:04.943500 kubelet[2198]: E1002 18:56:04.943445 2198 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"