Oct 2 19:38:40.203040 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Oct 2 19:38:40.203090 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Oct 2 17:55:37 -00 2023 Oct 2 19:38:40.203116 kernel: efi: EFI v2.70 by EDK II Oct 2 19:38:40.203131 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71accf98 Oct 2 19:38:40.203145 kernel: ACPI: Early table checksum verification disabled Oct 2 19:38:40.203159 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Oct 2 19:38:40.203175 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Oct 2 19:38:40.203189 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Oct 2 19:38:40.203203 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Oct 2 19:38:40.203217 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Oct 2 19:38:40.203236 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Oct 2 19:38:40.203249 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Oct 2 19:38:40.203263 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Oct 2 19:38:40.203277 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Oct 2 19:38:40.203294 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Oct 2 19:38:40.203314 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Oct 2 19:38:40.203328 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Oct 2 19:38:40.203343 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Oct 2 19:38:40.203357 kernel: printk: bootconsole [uart0] enabled Oct 2 19:38:40.203372 kernel: NUMA: Failed to initialise from firmware Oct 2 19:38:40.203386 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Oct 2 19:38:40.203401 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff] Oct 2 19:38:40.203416 kernel: Zone ranges: Oct 2 19:38:40.203430 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Oct 2 19:38:40.203445 kernel: DMA32 empty Oct 2 19:38:40.203459 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Oct 2 19:38:40.203478 kernel: Movable zone start for each node Oct 2 19:38:40.203493 kernel: Early memory node ranges Oct 2 19:38:40.203507 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff] Oct 2 19:38:40.203521 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Oct 2 19:38:40.203536 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Oct 2 19:38:40.203550 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Oct 2 19:38:40.203564 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Oct 2 19:38:40.203579 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Oct 2 19:38:40.203593 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Oct 2 19:38:40.203608 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Oct 2 19:38:40.203622 kernel: psci: probing for conduit method from ACPI. Oct 2 19:38:40.203637 kernel: psci: PSCIv1.0 detected in firmware. 
Oct 2 19:38:40.203656 kernel: psci: Using standard PSCI v0.2 function IDs Oct 2 19:38:40.203671 kernel: psci: Trusted OS migration not required Oct 2 19:38:40.203693 kernel: psci: SMC Calling Convention v1.1 Oct 2 19:38:40.203708 kernel: ACPI: SRAT not present Oct 2 19:38:40.203725 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Oct 2 19:38:40.203744 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Oct 2 19:38:40.203760 kernel: pcpu-alloc: [0] 0 [0] 1 Oct 2 19:38:40.203776 kernel: Detected PIPT I-cache on CPU0 Oct 2 19:38:40.203791 kernel: CPU features: detected: GIC system register CPU interface Oct 2 19:38:40.203807 kernel: CPU features: detected: Spectre-v2 Oct 2 19:38:40.203822 kernel: CPU features: detected: Spectre-v3a Oct 2 19:38:40.203877 kernel: CPU features: detected: Spectre-BHB Oct 2 19:38:40.203897 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 2 19:38:40.203914 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 2 19:38:40.203929 kernel: CPU features: detected: ARM erratum 1742098 Oct 2 19:38:40.203945 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Oct 2 19:38:40.203966 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Oct 2 19:38:40.203982 kernel: Policy zone: Normal Oct 2 19:38:40.204000 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 19:38:40.204017 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:38:40.204032 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 19:38:40.204048 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:38:40.204063 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:38:40.204079 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Oct 2 19:38:40.204096 kernel: Memory: 3826444K/4030464K available (9792K kernel code, 2092K rwdata, 7548K rodata, 34560K init, 779K bss, 204020K reserved, 0K cma-reserved) Oct 2 19:38:40.204112 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 2 19:38:40.204132 kernel: trace event string verifier disabled Oct 2 19:38:40.204148 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 2 19:38:40.204164 kernel: rcu: RCU event tracing is enabled. Oct 2 19:38:40.204180 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 2 19:38:40.204196 kernel: Trampoline variant of Tasks RCU enabled. Oct 2 19:38:40.204212 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:38:40.204228 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 2 19:38:40.204243 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 2 19:38:40.204259 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 2 19:38:40.204274 kernel: GICv3: 96 SPIs implemented Oct 2 19:38:40.204290 kernel: GICv3: 0 Extended SPIs implemented Oct 2 19:38:40.204306 kernel: GICv3: Distributor has no Range Selector support Oct 2 19:38:40.204327 kernel: Root IRQ handler: gic_handle_irq Oct 2 19:38:40.204343 kernel: GICv3: 16 PPIs implemented Oct 2 19:38:40.204358 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Oct 2 19:38:40.204373 kernel: ACPI: SRAT not present Oct 2 19:38:40.204388 kernel: ITS [mem 0x10080000-0x1009ffff] Oct 2 19:38:40.204404 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1) Oct 2 19:38:40.204420 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1) Oct 2 19:38:40.204435 kernel: GICv3: using LPI property table @0x00000004000c0000 Oct 2 19:38:40.204451 kernel: ITS: Using hypervisor restricted LPI range [128] Oct 2 19:38:40.204466 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000 Oct 2 19:38:40.204481 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Oct 2 19:38:40.204502 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Oct 2 19:38:40.204518 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Oct 2 19:38:40.204534 kernel: Console: colour dummy device 80x25 Oct 2 19:38:40.204550 kernel: printk: console [tty1] enabled Oct 2 19:38:40.204565 kernel: ACPI: Core revision 20210730 Oct 2 19:38:40.204582 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Oct 2 19:38:40.204598 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:38:40.204614 kernel: LSM: Security Framework initializing Oct 2 19:38:40.204630 kernel: SELinux: Initializing. Oct 2 19:38:40.204646 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:38:40.204667 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:38:40.204683 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:38:40.204699 kernel: Platform MSI: ITS@0x10080000 domain created Oct 2 19:38:40.204714 kernel: PCI/MSI: ITS@0x10080000 domain created Oct 2 19:38:40.204730 kernel: Remapping and enabling EFI services. Oct 2 19:38:40.204746 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:38:40.204761 kernel: Detected PIPT I-cache on CPU1 Oct 2 19:38:40.204777 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Oct 2 19:38:40.204793 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000 Oct 2 19:38:40.204814 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Oct 2 19:38:40.204830 kernel: smp: Brought up 1 node, 2 CPUs Oct 2 19:38:40.204882 kernel: SMP: Total of 2 processors activated. 
Oct 2 19:38:40.204899 kernel: CPU features: detected: 32-bit EL0 Support Oct 2 19:38:40.204915 kernel: CPU features: detected: 32-bit EL1 Support Oct 2 19:38:40.204931 kernel: CPU features: detected: CRC32 instructions Oct 2 19:38:40.204946 kernel: CPU: All CPU(s) started at EL1 Oct 2 19:38:40.204962 kernel: alternatives: patching kernel code Oct 2 19:38:40.204977 kernel: devtmpfs: initialized Oct 2 19:38:40.205001 kernel: KASLR disabled due to lack of seed Oct 2 19:38:40.205018 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:38:40.205035 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 2 19:38:40.205061 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:38:40.205082 kernel: SMBIOS 3.0.0 present. Oct 2 19:38:40.205098 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Oct 2 19:38:40.205114 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:38:40.205131 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 2 19:38:40.205172 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 2 19:38:40.205191 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 2 19:38:40.205208 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:38:40.205225 kernel: audit: type=2000 audit(0.254:1): state=initialized audit_enabled=0 res=1 Oct 2 19:38:40.205248 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:38:40.205265 kernel: cpuidle: using governor menu Oct 2 19:38:40.205282 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 2 19:38:40.205298 kernel: ASID allocator initialised with 32768 entries Oct 2 19:38:40.205315 kernel: ACPI: bus type PCI registered Oct 2 19:38:40.205337 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:38:40.205353 kernel: Serial: AMBA PL011 UART driver Oct 2 19:38:40.205370 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:38:40.205387 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Oct 2 19:38:40.205403 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:38:40.205419 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Oct 2 19:38:40.205435 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:38:40.205452 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 2 19:38:40.205468 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:38:40.205490 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:38:40.205506 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:38:40.205522 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:38:40.205539 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:38:40.205555 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:38:40.205571 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:38:40.205587 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 19:38:40.205604 kernel: ACPI: Interpreter enabled Oct 2 19:38:40.205620 kernel: ACPI: Using GIC for interrupt routing Oct 2 19:38:40.205642 kernel: ACPI: MCFG table detected, 1 entries Oct 2 19:38:40.205659 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Oct 2 19:38:40.206156 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:38:40.206398 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 2 19:38:40.206617 kernel: 
acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 2 19:38:40.206829 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Oct 2 19:38:40.207067 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Oct 2 19:38:40.207101 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Oct 2 19:38:40.207118 kernel: acpiphp: Slot [1] registered Oct 2 19:38:40.207134 kernel: acpiphp: Slot [2] registered Oct 2 19:38:40.207150 kernel: acpiphp: Slot [3] registered Oct 2 19:38:40.207166 kernel: acpiphp: Slot [4] registered Oct 2 19:38:40.207183 kernel: acpiphp: Slot [5] registered Oct 2 19:38:40.207198 kernel: acpiphp: Slot [6] registered Oct 2 19:38:40.207214 kernel: acpiphp: Slot [7] registered Oct 2 19:38:40.207230 kernel: acpiphp: Slot [8] registered Oct 2 19:38:40.207251 kernel: acpiphp: Slot [9] registered Oct 2 19:38:40.207268 kernel: acpiphp: Slot [10] registered Oct 2 19:38:40.207284 kernel: acpiphp: Slot [11] registered Oct 2 19:38:40.207300 kernel: acpiphp: Slot [12] registered Oct 2 19:38:40.207316 kernel: acpiphp: Slot [13] registered Oct 2 19:38:40.207332 kernel: acpiphp: Slot [14] registered Oct 2 19:38:40.207347 kernel: acpiphp: Slot [15] registered Oct 2 19:38:40.207364 kernel: acpiphp: Slot [16] registered Oct 2 19:38:40.207379 kernel: acpiphp: Slot [17] registered Oct 2 19:38:40.207395 kernel: acpiphp: Slot [18] registered Oct 2 19:38:40.207416 kernel: acpiphp: Slot [19] registered Oct 2 19:38:40.207432 kernel: acpiphp: Slot [20] registered Oct 2 19:38:40.207448 kernel: acpiphp: Slot [21] registered Oct 2 19:38:40.207464 kernel: acpiphp: Slot [22] registered Oct 2 19:38:40.207480 kernel: acpiphp: Slot [23] registered Oct 2 19:38:40.207496 kernel: acpiphp: Slot [24] registered Oct 2 19:38:40.207512 kernel: acpiphp: Slot [25] registered Oct 2 19:38:40.207528 kernel: acpiphp: Slot [26] registered Oct 2 19:38:40.207544 kernel: acpiphp: Slot [27] registered Oct 2 19:38:40.207564 kernel: acpiphp: Slot [28] registered Oct 2 19:38:40.207580 kernel: acpiphp: Slot [29] registered Oct 2 19:38:40.207597 kernel: acpiphp: Slot [30] registered Oct 2 19:38:40.207612 kernel: acpiphp: Slot [31] registered Oct 2 19:38:40.207628 kernel: PCI host bridge to bus 0000:00 Oct 2 19:38:40.224975 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Oct 2 19:38:40.225275 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 2 19:38:40.225518 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Oct 2 19:38:40.225742 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Oct 2 19:38:40.226060 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Oct 2 19:38:40.226312 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Oct 2 19:38:40.226533 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Oct 2 19:38:40.226775 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Oct 2 19:38:40.227070 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Oct 2 19:38:40.227326 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 2 19:38:40.227588 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Oct 2 19:38:40.227809 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Oct 2 19:38:40.232178 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Oct 2 19:38:40.232480 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Oct 2 19:38:40.232827 kernel: 
pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 2 19:38:40.233234 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Oct 2 19:38:40.233519 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Oct 2 19:38:40.233772 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Oct 2 19:38:40.234081 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Oct 2 19:38:40.234352 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Oct 2 19:38:40.234636 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Oct 2 19:38:40.235015 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 2 19:38:40.235259 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Oct 2 19:38:40.235306 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 2 19:38:40.235325 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 2 19:38:40.235342 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 2 19:38:40.235359 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 2 19:38:40.235376 kernel: iommu: Default domain type: Translated Oct 2 19:38:40.235393 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 2 19:38:40.235410 kernel: vgaarb: loaded Oct 2 19:38:40.235426 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:38:40.235444 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:38:40.235467 kernel: PTP clock support registered Oct 2 19:38:40.235484 kernel: Registered efivars operations Oct 2 19:38:40.235501 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 2 19:38:40.235518 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:38:40.235535 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:38:40.235552 kernel: pnp: PnP ACPI init Oct 2 19:38:40.235826 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Oct 2 19:38:40.239950 kernel: pnp: PnP ACPI: found 1 devices Oct 2 19:38:40.239970 kernel: NET: Registered PF_INET protocol family Oct 2 19:38:40.239998 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 19:38:40.240016 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 2 19:38:40.240033 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:38:40.240049 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:38:40.240066 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 2 19:38:40.245484 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 2 19:38:40.245504 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:38:40.245522 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:38:40.245539 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:38:40.245565 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:38:40.245582 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Oct 2 19:38:40.245598 kernel: kvm [1]: HYP mode not available Oct 2 19:38:40.245614 kernel: Initialise system trusted keyrings Oct 2 19:38:40.245631 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 2 19:38:40.245648 kernel: Key type asymmetric registered Oct 2 19:38:40.245664 kernel: Asymmetric key parser 'x509' registered Oct 2 19:38:40.245681 kernel: Block 
layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:38:40.245697 kernel: io scheduler mq-deadline registered Oct 2 19:38:40.245718 kernel: io scheduler kyber registered Oct 2 19:38:40.245735 kernel: io scheduler bfq registered Oct 2 19:38:40.246045 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Oct 2 19:38:40.246080 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 2 19:38:40.246097 kernel: ACPI: button: Power Button [PWRB] Oct 2 19:38:40.246114 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:38:40.246132 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Oct 2 19:38:40.246343 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Oct 2 19:38:40.246377 kernel: printk: console [ttyS0] disabled Oct 2 19:38:40.246395 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Oct 2 19:38:40.246412 kernel: printk: console [ttyS0] enabled Oct 2 19:38:40.246429 kernel: printk: bootconsole [uart0] disabled Oct 2 19:38:40.246446 kernel: thunder_xcv, ver 1.0 Oct 2 19:38:40.246463 kernel: thunder_bgx, ver 1.0 Oct 2 19:38:40.246479 kernel: nicpf, ver 1.0 Oct 2 19:38:40.246495 kernel: nicvf, ver 1.0 Oct 2 19:38:40.246729 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 2 19:38:40.246992 kernel: rtc-efi rtc-efi.0: setting system clock to 2023-10-02T19:38:39 UTC (1696275519) Oct 2 19:38:40.247024 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 2 19:38:40.247041 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:38:40.247058 kernel: Segment Routing with IPv6 Oct 2 19:38:40.247074 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:38:40.247091 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:38:40.247107 kernel: Key type dns_resolver registered Oct 2 19:38:40.247124 kernel: registered taskstats version 1 Oct 2 19:38:40.247148 kernel: Loading compiled-in X.509 certificates Oct 2 19:38:40.247165 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 3a2a38edc68cb70dc60ec0223a6460557b3bb28d' Oct 2 19:38:40.247181 kernel: Key type .fscrypt registered Oct 2 19:38:40.247198 kernel: Key type fscrypt-provisioning registered Oct 2 19:38:40.247214 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 2 19:38:40.247231 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:38:40.247247 kernel: ima: No architecture policies found Oct 2 19:38:40.247264 kernel: Freeing unused kernel memory: 34560K Oct 2 19:38:40.247281 kernel: Run /init as init process Oct 2 19:38:40.247302 kernel: with arguments: Oct 2 19:38:40.247319 kernel: /init Oct 2 19:38:40.247334 kernel: with environment: Oct 2 19:38:40.247350 kernel: HOME=/ Oct 2 19:38:40.247366 kernel: TERM=linux Oct 2 19:38:40.247382 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:38:40.247405 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:38:40.247427 systemd[1]: Detected virtualization amazon. Oct 2 19:38:40.247452 systemd[1]: Detected architecture arm64. Oct 2 19:38:40.247471 systemd[1]: Running in initrd. Oct 2 19:38:40.247488 systemd[1]: No hostname configured, using default hostname. Oct 2 19:38:40.247506 systemd[1]: Hostname set to . 
Oct 2 19:38:40.247526 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:38:40.247545 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:38:40.247563 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:38:40.247580 systemd[1]: Reached target cryptsetup.target. Oct 2 19:38:40.247604 systemd[1]: Reached target paths.target. Oct 2 19:38:40.247629 systemd[1]: Reached target slices.target. Oct 2 19:38:40.247672 systemd[1]: Reached target swap.target. Oct 2 19:38:40.247702 systemd[1]: Reached target timers.target. Oct 2 19:38:40.247750 systemd[1]: Listening on iscsid.socket. Oct 2 19:38:40.247772 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:38:40.247790 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:38:40.247808 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:38:40.253910 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:38:40.253959 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:38:40.253979 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:38:40.253997 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:38:40.254015 systemd[1]: Reached target sockets.target. Oct 2 19:38:40.254033 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:38:40.254051 systemd[1]: Finished network-cleanup.service. Oct 2 19:38:40.254069 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:38:40.254087 systemd[1]: Starting systemd-journald.service... Oct 2 19:38:40.254115 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:38:40.254133 systemd[1]: Starting systemd-resolved.service... Oct 2 19:38:40.254151 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:38:40.254169 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:38:40.254187 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:38:40.254205 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:38:40.254222 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:38:40.254241 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:38:40.254258 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:38:40.254281 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:38:40.254300 kernel: audit: type=1130 audit(1696275520.212:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:40.254318 kernel: Bridge firewalling registered Oct 2 19:38:40.254334 kernel: SCSI subsystem initialized Oct 2 19:38:40.254356 systemd-journald[308]: Journal started Oct 2 19:38:40.254464 systemd-journald[308]: Runtime Journal (/run/log/journal/ec2df4f863818626ab583a9d83df9869) is 8.0M, max 75.4M, 67.4M free. Oct 2 19:38:40.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:40.155959 systemd-modules-load[309]: Inserted module 'overlay' Oct 2 19:38:40.259108 systemd[1]: Started systemd-journald.service. Oct 2 19:38:40.226464 systemd-modules-load[309]: Inserted module 'br_netfilter' Oct 2 19:38:40.275881 kernel: audit: type=1130 audit(1696275520.260:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 2 19:38:40.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:40.278968 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:38:40.307057 kernel: audit: type=1130 audit(1696275520.279:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:40.307100 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:38:40.307125 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:38:40.307147 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:38:40.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:40.285759 systemd-resolved[310]: Positive Trust Anchors: Oct 2 19:38:40.285773 systemd-resolved[310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:38:40.285826 systemd-resolved[310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:38:40.290263 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:38:40.327388 systemd-modules-load[309]: Inserted module 'dm_multipath' Oct 2 19:38:40.331739 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:38:40.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:40.343350 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:38:40.346240 kernel: audit: type=1130 audit(1696275520.332:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:40.382536 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:38:40.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:40.393757 dracut-cmdline[326]: dracut-dracut-053 Oct 2 19:38:40.399865 kernel: audit: type=1130 audit(1696275520.383:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:38:40.405356 dracut-cmdline[326]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 19:38:40.632863 kernel: Loading iSCSI transport class v2.0-870. Oct 2 19:38:40.643870 kernel: iscsi: registered transport (tcp) Oct 2 19:38:40.670414 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:38:40.670485 kernel: QLogic iSCSI HBA Driver Oct 2 19:38:40.841869 kernel: random: crng init done Oct 2 19:38:40.841969 systemd-resolved[310]: Defaulting to hostname 'linux'. Oct 2 19:38:40.845758 systemd[1]: Started systemd-resolved.service. Oct 2 19:38:40.849071 systemd[1]: Reached target nss-lookup.target. Oct 2 19:38:40.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:40.864862 kernel: audit: type=1130 audit(1696275520.847:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:40.913319 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:38:40.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:40.918012 systemd[1]: Starting dracut-pre-udev.service... Oct 2 19:38:40.927868 kernel: audit: type=1130 audit(1696275520.915:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:41.011879 kernel: raid6: neonx8 gen() 6440 MB/s Oct 2 19:38:41.029866 kernel: raid6: neonx8 xor() 4537 MB/s Oct 2 19:38:41.047890 kernel: raid6: neonx4 gen() 6621 MB/s Oct 2 19:38:41.065878 kernel: raid6: neonx4 xor() 4639 MB/s Oct 2 19:38:41.083867 kernel: raid6: neonx2 gen() 5824 MB/s Oct 2 19:38:41.101869 kernel: raid6: neonx2 xor() 4342 MB/s Oct 2 19:38:41.119865 kernel: raid6: neonx1 gen() 4504 MB/s Oct 2 19:38:41.137865 kernel: raid6: neonx1 xor() 3559 MB/s Oct 2 19:38:41.155866 kernel: raid6: int64x8 gen() 3432 MB/s Oct 2 19:38:41.173865 kernel: raid6: int64x8 xor() 2041 MB/s Oct 2 19:38:41.191866 kernel: raid6: int64x4 gen() 3849 MB/s Oct 2 19:38:41.209866 kernel: raid6: int64x4 xor() 2162 MB/s Oct 2 19:38:41.227891 kernel: raid6: int64x2 gen() 3603 MB/s Oct 2 19:38:41.245889 kernel: raid6: int64x2 xor() 1902 MB/s Oct 2 19:38:41.263890 kernel: raid6: int64x1 gen() 2755 MB/s Oct 2 19:38:41.283561 kernel: raid6: int64x1 xor() 1424 MB/s Oct 2 19:38:41.283630 kernel: raid6: using algorithm neonx4 gen() 6621 MB/s Oct 2 19:38:41.283666 kernel: raid6: .... 
xor() 4639 MB/s, rmw enabled Oct 2 19:38:41.285496 kernel: raid6: using neon recovery algorithm Oct 2 19:38:41.305895 kernel: xor: measuring software checksum speed Oct 2 19:38:41.307889 kernel: 8regs : 9400 MB/sec Oct 2 19:38:41.310892 kernel: 32regs : 11148 MB/sec Oct 2 19:38:41.315010 kernel: arm64_neon : 9612 MB/sec Oct 2 19:38:41.315082 kernel: xor: using function: 32regs (11148 MB/sec) Oct 2 19:38:41.409910 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Oct 2 19:38:41.452797 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:38:41.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:41.461000 audit: BPF prog-id=7 op=LOAD Oct 2 19:38:41.463906 kernel: audit: type=1130 audit(1696275521.453:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:41.464682 systemd[1]: Starting systemd-udevd.service... Oct 2 19:38:41.470105 kernel: audit: type=1334 audit(1696275521.461:10): prog-id=7 op=LOAD Oct 2 19:38:41.461000 audit: BPF prog-id=8 op=LOAD Oct 2 19:38:41.506085 systemd-udevd[509]: Using default interface naming scheme 'v252'. Oct 2 19:38:41.518398 systemd[1]: Started systemd-udevd.service. Oct 2 19:38:41.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:41.524129 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:38:41.590041 dracut-pre-trigger[513]: rd.md=0: removing MD RAID activation Oct 2 19:38:41.712564 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:38:41.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:41.717760 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:38:41.839182 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:38:41.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:41.985901 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 2 19:38:41.985981 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Oct 2 19:38:42.005648 kernel: ena 0000:00:05.0: ENA device version: 0.10 Oct 2 19:38:42.005975 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Oct 2 19:38:42.017895 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Oct 2 19:38:42.020881 kernel: nvme nvme0: pci function 0000:00:04.0 Oct 2 19:38:42.027013 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:ae:54:49:ff:dd Oct 2 19:38:42.030872 kernel: nvme nvme0: 2/0/0 default/read/poll queues Oct 2 19:38:42.037028 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 2 19:38:42.037078 kernel: GPT:9289727 != 16777215 Oct 2 19:38:42.039354 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 2 19:38:42.040736 kernel: GPT:9289727 != 16777215 Oct 2 19:38:42.042730 kernel: GPT: Use GNU Parted to correct GPT errors. 
Oct 2 19:38:42.044331 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:38:42.048748 (udev-worker)[557]: Network interface NamePolicy= disabled on kernel command line. Oct 2 19:38:42.136875 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (566) Oct 2 19:38:42.259042 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:38:42.295178 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:38:42.340413 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:38:42.410568 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:38:42.416630 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:38:42.422756 systemd[1]: Starting disk-uuid.service... Oct 2 19:38:42.446391 disk-uuid[670]: Primary Header is updated. Oct 2 19:38:42.446391 disk-uuid[670]: Secondary Entries is updated. Oct 2 19:38:42.446391 disk-uuid[670]: Secondary Header is updated. Oct 2 19:38:42.456894 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:38:42.464878 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:38:42.475866 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:38:43.473621 disk-uuid[671]: The operation has completed successfully. Oct 2 19:38:43.475962 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:38:43.772483 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:38:43.772754 systemd[1]: Finished disk-uuid.service. Oct 2 19:38:43.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:43.779825 kernel: kauditd_printk_skb: 4 callbacks suppressed Oct 2 19:38:43.779911 kernel: audit: type=1130 audit(1696275523.775:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:43.780734 systemd[1]: Starting verity-setup.service... Oct 2 19:38:43.796610 kernel: audit: type=1131 audit(1696275523.777:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:43.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:43.842888 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 2 19:38:43.963423 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:38:43.970009 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:38:43.985681 systemd[1]: Finished verity-setup.service. Oct 2 19:38:43.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:43.996895 kernel: audit: type=1130 audit(1696275523.984:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:44.082886 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:38:44.084721 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:38:44.085489 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Oct 2 19:38:44.101449 systemd[1]: Starting ignition-setup.service... Oct 2 19:38:44.106997 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 19:38:44.144945 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:38:44.145023 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:38:44.147558 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:38:44.162940 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:38:44.205103 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:38:44.252225 systemd[1]: Finished ignition-setup.service. Oct 2 19:38:44.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:44.265230 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:38:44.270429 kernel: audit: type=1130 audit(1696275524.253:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:44.493997 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:38:44.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:44.507533 systemd[1]: Starting systemd-networkd.service... Oct 2 19:38:44.496000 audit: BPF prog-id=9 op=LOAD Oct 2 19:38:44.511149 kernel: audit: type=1130 audit(1696275524.495:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:44.514881 kernel: audit: type=1334 audit(1696275524.496:20): prog-id=9 op=LOAD Oct 2 19:38:44.570221 systemd-networkd[1194]: lo: Link UP Oct 2 19:38:44.570662 systemd-networkd[1194]: lo: Gained carrier Oct 2 19:38:44.571726 systemd-networkd[1194]: Enumeration completed Oct 2 19:38:44.571893 systemd[1]: Started systemd-networkd.service. Oct 2 19:38:44.572444 systemd-networkd[1194]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:38:44.583116 systemd-networkd[1194]: eth0: Link UP Oct 2 19:38:44.600076 kernel: audit: type=1130 audit(1696275524.584:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:44.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:44.583130 systemd-networkd[1194]: eth0: Gained carrier Oct 2 19:38:44.585390 systemd[1]: Reached target network.target. Oct 2 19:38:44.595797 systemd[1]: Starting iscsiuio.service... Oct 2 19:38:44.614057 systemd-networkd[1194]: eth0: DHCPv4 address 172.31.27.230/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 2 19:38:44.623812 systemd[1]: Started iscsiuio.service. Oct 2 19:38:44.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:44.627298 systemd[1]: Starting iscsid.service... 
Oct 2 19:38:44.642886 kernel: audit: type=1130 audit(1696275524.624:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:44.650896 iscsid[1203]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:38:44.650896 iscsid[1203]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Oct 2 19:38:44.650896 iscsid[1203]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:38:44.650896 iscsid[1203]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 19:38:44.669610 iscsid[1203]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:38:44.669610 iscsid[1203]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:38:44.684462 systemd[1]: Started iscsid.service. Oct 2 19:38:44.700441 kernel: audit: type=1130 audit(1696275524.685:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:44.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:44.688505 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:38:44.738383 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:38:44.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:44.742184 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:38:44.769266 kernel: audit: type=1130 audit(1696275524.740:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:44.752897 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:38:44.754738 systemd[1]: Reached target remote-fs.target. Oct 2 19:38:44.757758 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:38:44.793923 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:38:44.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:44.862160 ignition[1117]: Ignition 2.14.0 Oct 2 19:38:44.862193 ignition[1117]: Stage: fetch-offline Oct 2 19:38:44.862600 ignition[1117]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:38:44.863903 ignition[1117]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:38:44.883075 ignition[1117]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:38:44.885635 ignition[1117]: Ignition finished successfully Oct 2 19:38:44.888956 systemd[1]: Finished ignition-fetch-offline.service. 
Oct 2 19:38:44.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:44.894274 systemd[1]: Starting ignition-fetch.service... Oct 2 19:38:44.923559 ignition[1218]: Ignition 2.14.0 Oct 2 19:38:44.923591 ignition[1218]: Stage: fetch Oct 2 19:38:44.924067 ignition[1218]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:38:44.924226 ignition[1218]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:38:44.940984 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:38:44.943355 ignition[1218]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:38:44.950694 ignition[1218]: INFO : PUT result: OK Oct 2 19:38:44.954243 ignition[1218]: DEBUG : parsed url from cmdline: "" Oct 2 19:38:44.954243 ignition[1218]: INFO : no config URL provided Oct 2 19:38:44.954243 ignition[1218]: INFO : reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:38:44.961050 ignition[1218]: INFO : no config at "/usr/lib/ignition/user.ign" Oct 2 19:38:44.961050 ignition[1218]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:38:44.961050 ignition[1218]: INFO : PUT result: OK Oct 2 19:38:44.961050 ignition[1218]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Oct 2 19:38:44.970291 ignition[1218]: INFO : GET result: OK Oct 2 19:38:44.974982 ignition[1218]: DEBUG : parsing config with SHA512: a6d2d952001799fbe608fbb0138da3770ee1f2244f1242d60c00fe17015e0b356b49aa065b78e8f4aaa00656dde874b7c7164fcffd069f213c0e95c1d8c99609 Oct 2 19:38:45.000459 unknown[1218]: fetched base config from "system" Oct 2 19:38:45.000489 unknown[1218]: fetched base config from "system" Oct 2 19:38:45.000504 unknown[1218]: fetched user config from "aws" Oct 2 19:38:45.007387 ignition[1218]: fetch: fetch complete Oct 2 19:38:45.008289 ignition[1218]: fetch: fetch passed Oct 2 19:38:45.008405 ignition[1218]: Ignition finished successfully Oct 2 19:38:45.014311 systemd[1]: Finished ignition-fetch.service. Oct 2 19:38:45.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:45.019251 systemd[1]: Starting ignition-kargs.service... Oct 2 19:38:45.053489 ignition[1224]: Ignition 2.14.0 Oct 2 19:38:45.053520 ignition[1224]: Stage: kargs Oct 2 19:38:45.053930 ignition[1224]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:38:45.053993 ignition[1224]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:38:45.069561 ignition[1224]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:38:45.072354 ignition[1224]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:38:45.076399 ignition[1224]: INFO : PUT result: OK Oct 2 19:38:45.084827 ignition[1224]: kargs: kargs passed Oct 2 19:38:45.086650 ignition[1224]: Ignition finished successfully Oct 2 19:38:45.090367 systemd[1]: Finished ignition-kargs.service. 
Oct 2 19:38:45.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:45.093974 systemd[1]: Starting ignition-disks.service... Oct 2 19:38:45.126216 ignition[1230]: Ignition 2.14.0 Oct 2 19:38:45.126247 ignition[1230]: Stage: disks Oct 2 19:38:45.126633 ignition[1230]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:38:45.126694 ignition[1230]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:38:45.142779 ignition[1230]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:38:45.145342 ignition[1230]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:38:45.148859 ignition[1230]: INFO : PUT result: OK Oct 2 19:38:45.154465 ignition[1230]: disks: disks passed Oct 2 19:38:45.154602 ignition[1230]: Ignition finished successfully Oct 2 19:38:45.159224 systemd[1]: Finished ignition-disks.service. Oct 2 19:38:45.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:45.162630 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:38:45.166383 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:38:45.169763 systemd[1]: Reached target local-fs.target. Oct 2 19:38:45.172986 systemd[1]: Reached target sysinit.target. Oct 2 19:38:45.176145 systemd[1]: Reached target basic.target. Oct 2 19:38:45.180918 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:38:45.240447 systemd-fsck[1238]: ROOT: clean, 603/553520 files, 56011/553472 blocks Oct 2 19:38:45.251359 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:38:45.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:45.254873 systemd[1]: Mounting sysroot.mount... Oct 2 19:38:45.289499 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:38:45.289364 systemd[1]: Mounted sysroot.mount. Oct 2 19:38:45.291314 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:38:45.304705 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:38:45.307191 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 19:38:45.307325 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:38:45.307398 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:38:45.338497 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:38:45.357324 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:38:45.362825 systemd[1]: Starting initrd-setup-root.service... 
Oct 2 19:38:45.386888 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1255) Oct 2 19:38:45.394004 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:38:45.394070 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:38:45.394094 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:38:45.400031 initrd-setup-root[1260]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:38:45.405857 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:38:45.414911 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:38:45.426079 initrd-setup-root[1286]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:38:45.446021 initrd-setup-root[1294]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:38:45.466351 initrd-setup-root[1302]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:38:45.696186 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:38:45.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:45.701129 systemd[1]: Starting ignition-mount.service... Oct 2 19:38:45.704074 systemd[1]: Starting sysroot-boot.service... Oct 2 19:38:45.717027 systemd-networkd[1194]: eth0: Gained IPv6LL Oct 2 19:38:45.738578 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Oct 2 19:38:45.738758 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Oct 2 19:38:45.769415 systemd[1]: Finished sysroot-boot.service. Oct 2 19:38:45.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:45.780263 ignition[1321]: INFO : Ignition 2.14.0 Oct 2 19:38:45.783874 ignition[1321]: INFO : Stage: mount Oct 2 19:38:45.783874 ignition[1321]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:38:45.783874 ignition[1321]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:38:45.799717 ignition[1321]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:38:45.799717 ignition[1321]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:38:45.806300 ignition[1321]: INFO : PUT result: OK Oct 2 19:38:45.812431 ignition[1321]: INFO : mount: mount passed Oct 2 19:38:45.814255 ignition[1321]: INFO : Ignition finished successfully Oct 2 19:38:45.817698 systemd[1]: Finished ignition-mount.service. Oct 2 19:38:45.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:45.822500 systemd[1]: Starting ignition-files.service... Oct 2 19:38:45.846533 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Oct 2 19:38:45.870887 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1330) Oct 2 19:38:45.876757 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:38:45.876808 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:38:45.876847 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:38:45.885872 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:38:45.891644 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:38:45.930759 ignition[1349]: INFO : Ignition 2.14.0 Oct 2 19:38:45.930759 ignition[1349]: INFO : Stage: files Oct 2 19:38:45.934451 ignition[1349]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:38:45.934451 ignition[1349]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:38:45.950358 ignition[1349]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:38:45.953522 ignition[1349]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:38:45.957592 ignition[1349]: INFO : PUT result: OK Oct 2 19:38:45.976745 ignition[1349]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:38:45.980778 ignition[1349]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:38:45.983733 ignition[1349]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:38:46.016815 ignition[1349]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:38:46.019809 ignition[1349]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:38:46.023569 unknown[1349]: wrote ssh authorized keys file for user: core Oct 2 19:38:46.026114 ignition[1349]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:38:46.030636 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Oct 2 19:38:46.034798 ignition[1349]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Oct 2 19:38:46.203236 ignition[1349]: INFO : GET result: OK Oct 2 19:38:46.616767 ignition[1349]: DEBUG : file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Oct 2 19:38:46.621962 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Oct 2 19:38:46.621962 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.24.2-linux-arm64.tar.gz" Oct 2 19:38:46.621962 ignition[1349]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-arm64.tar.gz: attempt #1 Oct 2 19:38:46.718864 ignition[1349]: INFO : GET result: OK Oct 2 19:38:46.879212 ignition[1349]: DEBUG : file matches expected sum of: ebd055e9b2888624d006decd582db742131ed815d059d529ba21eaf864becca98a84b20a10eec91051b9d837c6855d28d5042bf5e9a454f4540aec6b82d37e96 Oct 2 19:38:46.884400 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.24.2-linux-arm64.tar.gz" Oct 2 
19:38:46.884400 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Oct 2 19:38:46.884400 ignition[1349]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:38:46.902400 ignition[1349]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1519939592" Oct 2 19:38:46.909442 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1353) Oct 2 19:38:46.909479 ignition[1349]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1519939592": device or resource busy Oct 2 19:38:46.909479 ignition[1349]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1519939592", trying btrfs: device or resource busy Oct 2 19:38:46.909479 ignition[1349]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1519939592" Oct 2 19:38:46.921250 ignition[1349]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1519939592" Oct 2 19:38:46.934357 ignition[1349]: INFO : op(3): [started] unmounting "/mnt/oem1519939592" Oct 2 19:38:46.936754 ignition[1349]: INFO : op(3): [finished] unmounting "/mnt/oem1519939592" Oct 2 19:38:46.936754 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Oct 2 19:38:46.942700 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:38:46.942700 ignition[1349]: INFO : GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/arm64/kubeadm: attempt #1 Oct 2 19:38:46.951405 systemd[1]: mnt-oem1519939592.mount: Deactivated successfully. 
Oct 2 19:38:47.022990 ignition[1349]: INFO : GET result: OK Oct 2 19:38:48.417718 ignition[1349]: DEBUG : file matches expected sum of: daab8965a4f617d1570d04c031ab4d55fff6aa13a61f0e4045f2338947f9fb0ee3a80fdee57cfe86db885390595460342181e1ec52b89f127ef09c393ae3db7f Oct 2 19:38:48.423270 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:38:48.423270 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:38:48.423270 ignition[1349]: INFO : GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/arm64/kubelet: attempt #1 Oct 2 19:38:48.467413 ignition[1349]: INFO : GET result: OK Oct 2 19:38:50.364117 ignition[1349]: DEBUG : file matches expected sum of: 7b872a34d86e8aa75455a62a20f5cf16426de2ae54ffb8e0250fead920838df818201b8512c2f8bf4c939e5b21babab371f3a48803e2e861da9e6f8cdd022324 Oct 2 19:38:50.369478 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:38:50.369478 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:38:50.369478 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:38:50.369478 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:38:50.384235 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:38:50.384235 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Oct 2 19:38:50.384235 ignition[1349]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:38:50.402471 ignition[1349]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4218769591" Oct 2 19:38:50.405558 ignition[1349]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4218769591": device or resource busy Oct 2 19:38:50.405558 ignition[1349]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4218769591", trying btrfs: device or resource busy Oct 2 19:38:50.405558 ignition[1349]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4218769591" Oct 2 19:38:50.405558 ignition[1349]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4218769591" Oct 2 19:38:50.424885 ignition[1349]: INFO : op(6): [started] unmounting "/mnt/oem4218769591" Oct 2 19:38:50.424885 ignition[1349]: INFO : op(6): [finished] unmounting "/mnt/oem4218769591" Oct 2 19:38:50.424885 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Oct 2 19:38:50.424885 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Oct 2 19:38:50.424885 ignition[1349]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:38:50.441017 systemd[1]: mnt-oem4218769591.mount: Deactivated successfully. 
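The entries above show Ignition fetching cni-plugins, crictl, kubeadm and kubelet and reporting "file matches expected sum of: <sha512>" for each download. A minimal Go sketch of that kind of SHA-512 comparison follows; it is illustrative only and not Ignition's actual code. The file name and expected sum are copied from the cni-plugins entry earlier in the log; everything else is assumed.

// Illustrative sketch only; not taken from this boot log.
package main

import (
	"crypto/sha512"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func main() {
	// File name and expected SHA-512 taken from the cni-plugins entries above.
	path := "cni-plugins-linux-arm64-v1.1.1.tgz"
	expected := "6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742"

	f, err := os.Open(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Hash the file contents and compare against the published sum.
	h := sha512.New()
	if _, err := io.Copy(h, f); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got == expected {
		fmt.Println("file matches expected sum")
	} else {
		fmt.Printf("sum mismatch: got %s\n", got)
	}
}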
Oct 2 19:38:50.458708 ignition[1349]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3219411059" Oct 2 19:38:50.461784 ignition[1349]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3219411059": device or resource busy Oct 2 19:38:50.461784 ignition[1349]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3219411059", trying btrfs: device or resource busy Oct 2 19:38:50.461784 ignition[1349]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3219411059" Oct 2 19:38:50.481622 ignition[1349]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3219411059" Oct 2 19:38:50.486413 ignition[1349]: INFO : op(9): [started] unmounting "/mnt/oem3219411059" Oct 2 19:38:50.491383 systemd[1]: mnt-oem3219411059.mount: Deactivated successfully. Oct 2 19:38:50.495426 ignition[1349]: INFO : op(9): [finished] unmounting "/mnt/oem3219411059" Oct 2 19:38:50.498026 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Oct 2 19:38:50.501873 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 19:38:50.501873 ignition[1349]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:38:50.527794 ignition[1349]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2911161734" Oct 2 19:38:50.530941 ignition[1349]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2911161734": device or resource busy Oct 2 19:38:50.530941 ignition[1349]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2911161734", trying btrfs: device or resource busy Oct 2 19:38:50.530941 ignition[1349]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2911161734" Oct 2 19:38:50.542970 ignition[1349]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2911161734" Oct 2 19:38:50.542970 ignition[1349]: INFO : op(c): [started] unmounting "/mnt/oem2911161734" Oct 2 19:38:50.548235 ignition[1349]: INFO : op(c): [finished] unmounting "/mnt/oem2911161734" Oct 2 19:38:50.548235 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 19:38:50.554446 ignition[1349]: INFO : files: op(d): [started] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:38:50.554446 ignition[1349]: INFO : files: op(d): [finished] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:38:50.554446 ignition[1349]: INFO : files: op(e): [started] processing unit "amazon-ssm-agent.service" Oct 2 19:38:50.564388 ignition[1349]: INFO : files: op(e): op(f): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Oct 2 19:38:50.571210 ignition[1349]: INFO : files: op(e): op(f): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Oct 2 19:38:50.576177 ignition[1349]: INFO : files: op(e): [finished] processing unit "amazon-ssm-agent.service" Oct 2 19:38:50.581769 ignition[1349]: INFO : files: op(10): [started] processing unit "nvidia.service" Oct 2 19:38:50.581769 ignition[1349]: INFO : files: op(10): [finished] processing unit "nvidia.service" Oct 2 19:38:50.581769 ignition[1349]: INFO : files: op(11): [started] processing unit "prepare-cni-plugins.service" Oct 2 
19:38:50.581769 ignition[1349]: INFO : files: op(11): op(12): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:38:50.581769 ignition[1349]: INFO : files: op(11): op(12): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:38:50.581769 ignition[1349]: INFO : files: op(11): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:38:50.581769 ignition[1349]: INFO : files: op(13): [started] processing unit "prepare-critools.service" Oct 2 19:38:50.581769 ignition[1349]: INFO : files: op(13): op(14): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:38:50.581769 ignition[1349]: INFO : files: op(13): op(14): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:38:50.581769 ignition[1349]: INFO : files: op(13): [finished] processing unit "prepare-critools.service" Oct 2 19:38:50.581769 ignition[1349]: INFO : files: op(15): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:38:50.581769 ignition[1349]: INFO : files: op(15): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:38:50.581769 ignition[1349]: INFO : files: op(16): [started] setting preset to enabled for "amazon-ssm-agent.service" Oct 2 19:38:50.581769 ignition[1349]: INFO : files: op(16): [finished] setting preset to enabled for "amazon-ssm-agent.service" Oct 2 19:38:50.581769 ignition[1349]: INFO : files: op(17): [started] setting preset to enabled for "nvidia.service" Oct 2 19:38:50.581769 ignition[1349]: INFO : files: op(17): [finished] setting preset to enabled for "nvidia.service" Oct 2 19:38:50.581769 ignition[1349]: INFO : files: op(18): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:38:50.581769 ignition[1349]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:38:50.581769 ignition[1349]: INFO : files: op(19): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:38:50.581769 ignition[1349]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:38:50.648096 ignition[1349]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:38:50.648096 ignition[1349]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:38:50.648096 ignition[1349]: INFO : files: files passed Oct 2 19:38:50.648096 ignition[1349]: INFO : Ignition finished successfully Oct 2 19:38:50.664825 systemd[1]: Finished ignition-files.service. Oct 2 19:38:50.686929 kernel: kauditd_printk_skb: 9 callbacks suppressed Oct 2 19:38:50.687004 kernel: audit: type=1130 audit(1696275530.666:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:50.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:50.678220 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Oct 2 19:38:50.680375 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:38:50.682083 systemd[1]: Starting ignition-quench.service... Oct 2 19:38:50.705409 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:38:50.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:50.705601 systemd[1]: Finished ignition-quench.service. Oct 2 19:38:50.727521 kernel: audit: type=1130 audit(1696275530.707:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:50.727563 kernel: audit: type=1131 audit(1696275530.717:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:50.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:50.741992 initrd-setup-root-after-ignition[1374]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:38:50.747119 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:38:50.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:50.751581 systemd[1]: Reached target ignition-complete.target. Oct 2 19:38:50.777239 kernel: audit: type=1130 audit(1696275530.750:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:50.764509 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:38:50.817167 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:38:50.817907 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:38:50.853129 kernel: audit: type=1130 audit(1696275530.819:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:50.853205 kernel: audit: type=1131 audit(1696275530.827:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:50.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:50.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:50.829101 systemd[1]: Reached target initrd-fs.target. Oct 2 19:38:50.838254 systemd[1]: Reached target initrd.target. Oct 2 19:38:50.841497 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:38:50.843147 systemd[1]: Starting dracut-pre-pivot.service... 
Oct 2 19:38:50.887989 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:38:50.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:50.893644 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:38:50.903297 kernel: audit: type=1130 audit(1696275530.888:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:50.923061 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:38:50.926698 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:38:50.930665 systemd[1]: Stopped target timers.target. Oct 2 19:38:50.933884 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:38:50.936137 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:38:50.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:50.939727 systemd[1]: Stopped target initrd.target. Oct 2 19:38:50.949763 kernel: audit: type=1131 audit(1696275530.937:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:50.950104 systemd[1]: Stopped target basic.target. Oct 2 19:38:50.955311 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:38:50.957738 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:38:50.961164 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:38:50.963290 systemd[1]: Stopped target remote-fs.target. Oct 2 19:38:50.989513 kernel: audit: type=1131 audit(1696275530.978:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:50.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:50.965261 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:38:50.967331 systemd[1]: Stopped target sysinit.target. Oct 2 19:38:50.969256 systemd[1]: Stopped target local-fs.target. Oct 2 19:38:51.008711 kernel: audit: type=1131 audit(1696275530.996:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:50.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:50.971171 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:38:51.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:50.974904 systemd[1]: Stopped target swap.target. 
Oct 2 19:38:51.029596 iscsid[1203]: iscsid shutting down. Oct 2 19:38:51.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:50.976631 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:38:50.976960 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:38:50.985069 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:38:50.996197 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:38:50.996496 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:38:50.998730 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:38:51.006581 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:38:51.079247 ignition[1387]: INFO : Ignition 2.14.0 Oct 2 19:38:51.079247 ignition[1387]: INFO : Stage: umount Oct 2 19:38:51.079247 ignition[1387]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:38:51.079247 ignition[1387]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:38:51.009363 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:38:51.010189 systemd[1]: Stopped ignition-files.service. Oct 2 19:38:51.013747 systemd[1]: Stopping ignition-mount.service... Oct 2 19:38:51.015569 systemd[1]: Stopping iscsid.service... Oct 2 19:38:51.125922 ignition[1387]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:38:51.125922 ignition[1387]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:38:51.030489 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:38:51.136925 ignition[1387]: INFO : PUT result: OK Oct 2 19:38:51.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.035694 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:38:51.036048 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:38:51.038252 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:38:51.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.038457 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:38:51.128789 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:38:51.170261 ignition[1387]: INFO : umount: umount passed Oct 2 19:38:51.170261 ignition[1387]: INFO : Ignition finished successfully Oct 2 19:38:51.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:38:51.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.129444 systemd[1]: Stopped iscsid.service. Oct 2 19:38:51.149996 systemd[1]: Stopping iscsiuio.service... Oct 2 19:38:51.156812 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:38:51.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.157201 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:38:51.165655 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:38:51.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.167812 systemd[1]: Stopped iscsiuio.service. Oct 2 19:38:51.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.176949 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:38:51.177225 systemd[1]: Stopped ignition-mount.service. Oct 2 19:38:51.180926 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:38:51.182481 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:38:51.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.186346 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:38:51.186462 systemd[1]: Stopped ignition-disks.service. Oct 2 19:38:51.188409 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:38:51.188603 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:38:51.192271 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 2 19:38:51.192382 systemd[1]: Stopped ignition-fetch.service. Oct 2 19:38:51.195696 systemd[1]: Stopped target network.target. Oct 2 19:38:51.197487 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:38:51.243000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:38:51.197621 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:38:51.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.198996 systemd[1]: Stopped target paths.target. 
Oct 2 19:38:51.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.199503 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:38:51.202963 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:38:51.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.206185 systemd[1]: Stopped target slices.target. Oct 2 19:38:51.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.207896 systemd[1]: Stopped target sockets.target. Oct 2 19:38:51.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.210137 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:38:51.210234 systemd[1]: Closed iscsid.socket. Oct 2 19:38:51.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.213480 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:38:51.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.213580 systemd[1]: Closed iscsiuio.socket. Oct 2 19:38:51.216294 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:38:51.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.216411 systemd[1]: Stopped ignition-setup.service. Oct 2 19:38:51.218309 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Oct 2 19:38:51.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:51.218415 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:38:51.221993 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:38:51.225101 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:38:51.231567 systemd-networkd[1194]: eth0: DHCPv6 lease lost Oct 2 19:38:51.361000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:38:51.232259 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:38:51.232516 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:38:51.248118 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:38:51.248352 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:38:51.251688 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:38:51.251772 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:38:51.255113 systemd[1]: Stopping network-cleanup.service... Oct 2 19:38:51.256623 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:38:51.256734 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:38:51.262633 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:38:51.413871 systemd-journald[308]: Received SIGTERM from PID 1 (n/a). Oct 2 19:38:51.262735 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:38:51.264616 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:38:51.265785 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:38:51.269857 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:38:51.281121 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:38:51.281391 systemd[1]: Stopped network-cleanup.service. Oct 2 19:38:51.285657 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:38:51.286199 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:38:51.288398 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:38:51.288480 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:38:51.290358 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:38:51.290431 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:38:51.292556 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:38:51.293004 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:38:51.298292 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:38:51.298392 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:38:51.301890 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:38:51.302005 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:38:51.305244 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:38:51.307086 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 19:38:51.307251 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 19:38:51.317382 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:38:51.317487 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:38:51.320955 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Oct 2 19:38:51.321077 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:38:51.335654 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:38:51.335857 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:38:51.340427 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:38:51.345405 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:38:51.373854 systemd[1]: Switching root. Oct 2 19:38:51.436146 systemd-journald[308]: Journal stopped Oct 2 19:38:57.187222 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:38:57.187798 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 19:38:57.189134 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:38:57.189197 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:38:57.189230 kernel: SELinux: policy capability open_perms=1 Oct 2 19:38:57.189269 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:38:57.189301 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:38:57.189330 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:38:57.189361 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:38:57.189457 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:38:57.189492 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:38:57.189530 systemd[1]: Successfully loaded SELinux policy in 113.975ms. Oct 2 19:38:57.189756 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.385ms. Oct 2 19:38:57.189798 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:38:57.189872 systemd[1]: Detected virtualization amazon. Oct 2 19:38:57.190018 systemd[1]: Detected architecture arm64. Oct 2 19:38:57.196440 systemd[1]: Detected first boot. Oct 2 19:38:57.196515 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:38:57.196549 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:38:57.196583 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:38:57.196621 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:38:57.196731 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 2 19:38:57.196771 kernel: kauditd_printk_skb: 40 callbacks suppressed Oct 2 19:38:57.196804 kernel: audit: type=1334 audit(1696275536.686:84): prog-id=12 op=LOAD Oct 2 19:38:57.196858 kernel: audit: type=1334 audit(1696275536.686:85): prog-id=3 op=UNLOAD Oct 2 19:38:57.196892 kernel: audit: type=1334 audit(1696275536.686:86): prog-id=13 op=LOAD Oct 2 19:38:57.196988 kernel: audit: type=1334 audit(1696275536.692:87): prog-id=14 op=LOAD Oct 2 19:38:57.197041 kernel: audit: type=1334 audit(1696275536.692:88): prog-id=4 op=UNLOAD Oct 2 19:38:57.197073 kernel: audit: type=1334 audit(1696275536.692:89): prog-id=5 op=UNLOAD Oct 2 19:38:57.197109 kernel: audit: type=1334 audit(1696275536.695:90): prog-id=15 op=LOAD Oct 2 19:38:57.197228 kernel: audit: type=1334 audit(1696275536.695:91): prog-id=12 op=UNLOAD Oct 2 19:38:57.197266 kernel: audit: type=1334 audit(1696275536.697:92): prog-id=16 op=LOAD Oct 2 19:38:57.197300 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:38:57.197332 kernel: audit: type=1334 audit(1696275536.700:93): prog-id=17 op=LOAD Oct 2 19:38:57.199441 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:38:57.199572 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:38:57.199607 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:38:57.199641 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:38:57.199678 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Oct 2 19:38:57.199708 systemd[1]: Created slice system-getty.slice. Oct 2 19:38:57.199742 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:38:57.199774 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:38:57.199808 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:38:57.199913 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:38:57.199949 systemd[1]: Created slice user.slice. Oct 2 19:38:57.199986 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:38:57.200017 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:38:57.200047 systemd[1]: Set up automount boot.automount. Oct 2 19:38:57.200080 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:38:57.200112 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:38:57.200142 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:38:57.200172 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:38:57.200201 systemd[1]: Reached target integritysetup.target. Oct 2 19:38:57.200234 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:38:57.200267 systemd[1]: Reached target remote-fs.target. Oct 2 19:38:57.200301 systemd[1]: Reached target slices.target. Oct 2 19:38:57.200333 systemd[1]: Reached target swap.target. Oct 2 19:38:57.200364 systemd[1]: Reached target torcx.target. Oct 2 19:38:57.200395 systemd[1]: Reached target veritysetup.target. Oct 2 19:38:57.200425 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:38:57.200455 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:38:57.200486 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:38:57.200518 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:38:57.200548 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:38:57.200583 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:38:57.200612 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:38:57.200642 systemd[1]: Mounting dev-mqueue.mount... 
Oct 2 19:38:57.200673 systemd[1]: Mounting media.mount... Oct 2 19:38:57.200707 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:38:57.200737 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:38:57.200766 systemd[1]: Mounting tmp.mount... Oct 2 19:38:57.200795 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:38:57.200825 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:38:57.200906 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:38:57.200939 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:38:57.200971 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:38:57.201001 systemd[1]: Starting modprobe@drm.service... Oct 2 19:38:57.201053 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:38:57.201084 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:38:57.201116 systemd[1]: Starting modprobe@loop.service... Oct 2 19:38:57.201149 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:38:57.201179 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:38:57.201216 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:38:57.201249 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:38:57.201280 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:38:57.201314 systemd[1]: Stopped systemd-journald.service. Oct 2 19:38:57.201343 systemd[1]: Starting systemd-journald.service... Oct 2 19:38:57.201373 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:38:57.201405 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:38:57.201441 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:38:57.201472 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:38:57.201503 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:38:57.201538 systemd[1]: Stopped verity-setup.service. Oct 2 19:38:57.201567 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:38:57.201597 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:38:57.201626 systemd[1]: Mounted media.mount. Oct 2 19:38:57.201656 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:38:57.201685 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:38:57.201715 systemd[1]: Mounted tmp.mount. Oct 2 19:38:57.201745 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:38:57.212184 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:38:57.212227 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:38:57.212260 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:38:57.212291 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:38:57.212324 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:38:57.212355 systemd[1]: Finished modprobe@drm.service. Oct 2 19:38:57.212389 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:38:57.212419 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:38:57.212449 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:38:57.212481 systemd[1]: Reached target network-pre.target. Oct 2 19:38:57.212511 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:38:57.212543 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:38:57.212572 kernel: fuse: init (API version 7.34) Oct 2 19:38:57.212603 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:38:57.212635 systemd[1]: Finished modprobe@fuse.service. 
Oct 2 19:38:57.212668 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:38:57.212700 systemd-journald[1498]: Journal started Oct 2 19:38:57.212798 systemd-journald[1498]: Runtime Journal (/run/log/journal/ec2df4f863818626ab583a9d83df9869) is 8.0M, max 75.4M, 67.4M free. Oct 2 19:38:52.176000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:38:52.362000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:38:52.362000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:38:52.362000 audit: BPF prog-id=10 op=LOAD Oct 2 19:38:52.363000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:38:52.363000 audit: BPF prog-id=11 op=LOAD Oct 2 19:38:52.363000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:38:56.686000 audit: BPF prog-id=12 op=LOAD Oct 2 19:38:56.686000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:38:56.686000 audit: BPF prog-id=13 op=LOAD Oct 2 19:38:56.692000 audit: BPF prog-id=14 op=LOAD Oct 2 19:38:56.692000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:38:56.692000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:38:56.695000 audit: BPF prog-id=15 op=LOAD Oct 2 19:38:56.695000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:38:56.697000 audit: BPF prog-id=16 op=LOAD Oct 2 19:38:56.700000 audit: BPF prog-id=17 op=LOAD Oct 2 19:38:56.700000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:38:56.700000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:38:56.702000 audit: BPF prog-id=18 op=LOAD Oct 2 19:38:56.702000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:38:56.705000 audit: BPF prog-id=19 op=LOAD Oct 2 19:38:56.708000 audit: BPF prog-id=20 op=LOAD Oct 2 19:38:56.708000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:38:56.708000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:38:56.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:56.715000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:38:56.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:56.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:57.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:57.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:57.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:38:57.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:57.014000 audit: BPF prog-id=21 op=LOAD Oct 2 19:38:57.015000 audit: BPF prog-id=22 op=LOAD Oct 2 19:38:57.015000 audit: BPF prog-id=23 op=LOAD Oct 2 19:38:57.015000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:38:57.015000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:38:57.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:57.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:57.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:57.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:57.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:57.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:57.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:57.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:57.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:57.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:57.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:38:57.182000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:38:57.182000 audit[1498]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd70e5ef0 a2=4000 a3=1 items=0 ppid=1 pid=1498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:57.182000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:38:57.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:57.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:52.581383 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:38:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:38:56.685750 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:38:52.582800 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:38:52Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:38:56.710421 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 2 19:38:52.582873 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:38:52Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:38:52.582943 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:38:52Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:38:57.228121 systemd[1]: Started systemd-journald.service. Oct 2 19:38:57.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:57.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:52.582969 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:38:52Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:38:57.225923 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:38:57.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:52.583037 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:38:52Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:38:57.228493 systemd[1]: Finished systemd-remount-fs.service. 
Oct 2 19:38:52.583068 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:38:52Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:38:57.230990 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:38:52.583492 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:38:52Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:38:52.583578 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:38:52Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:38:57.234439 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:38:52.583614 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:38:52Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:38:57.237535 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:38:52.584690 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:38:52Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:38:57.270075 kernel: loop: module loaded Oct 2 19:38:57.243671 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:38:52.584778 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:38:52Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:38:57.245533 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:38:52.584825 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:38:52Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:38:57.247796 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:38:52.584900 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:38:52Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:38:57.252426 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:38:52.584949 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:38:52Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:38:57.269478 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Oct 2 19:38:52.584987 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:38:52Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:38:55.781928 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:38:55Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:38:55.782492 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:38:55Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:38:55.782723 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:38:55Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:38:55.783229 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:38:55Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:38:55.783341 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:38:55Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:38:55.783472 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2023-10-02T19:38:55Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:38:57.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:57.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:57.280343 systemd[1]: Finished modprobe@loop.service. Oct 2 19:38:57.286273 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:38:57.322595 systemd-journald[1498]: Time spent on flushing to /var/log/journal/ec2df4f863818626ab583a9d83df9869 is 71.232ms for 1146 entries. Oct 2 19:38:57.322595 systemd-journald[1498]: System Journal (/var/log/journal/ec2df4f863818626ab583a9d83df9869) is 8.0M, max 195.6M, 187.6M free. Oct 2 19:38:57.406580 systemd-journald[1498]: Received client request to flush runtime journal. Oct 2 19:38:57.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:38:57.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:57.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:57.335991 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:38:57.346562 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:38:57.348766 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:38:57.409206 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:38:57.443403 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:38:57.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:57.449096 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:38:57.473748 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:38:57.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:57.478003 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:38:57.494715 udevadm[1533]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 2 19:38:57.644143 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:38:57.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:57.648561 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:38:57.754178 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:38:57.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:58.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:58.251000 audit: BPF prog-id=24 op=LOAD Oct 2 19:38:58.251000 audit: BPF prog-id=25 op=LOAD Oct 2 19:38:58.251000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:38:58.251000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:38:58.250407 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:38:58.254600 systemd[1]: Starting systemd-udevd.service... Oct 2 19:38:58.303040 systemd-udevd[1542]: Using default interface naming scheme 'v252'. Oct 2 19:38:58.336329 systemd[1]: Started systemd-udevd.service. Oct 2 19:38:58.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:58.338000 audit: BPF prog-id=26 op=LOAD Oct 2 19:38:58.346447 systemd[1]: Starting systemd-networkd.service... 
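[Editor's note] For scale, the journald flush report above (71.232 ms for 1146 entries; system journal 8.0M of a 195.6M cap) works out to roughly 62 microseconds per entry. A small sketch of that arithmetic, using only the figures quoted in the log:

    # Figures copied from the systemd-journald flush report above.
    flush_ms = 71.232
    entries = 1146
    used_mib, cap_mib = 8.0, 195.6

    per_entry_us = flush_ms * 1000 / entries   # ~62 microseconds per flushed entry
    used_fraction = used_mib / cap_mib         # ~4% of the journal size cap in use

    print(f"{per_entry_us:.1f} us/entry, {used_fraction:.1%} of journal cap used")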
Oct 2 19:38:58.351000 audit: BPF prog-id=27 op=LOAD Oct 2 19:38:58.351000 audit: BPF prog-id=28 op=LOAD Oct 2 19:38:58.351000 audit: BPF prog-id=29 op=LOAD Oct 2 19:38:58.354495 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:38:58.487164 systemd[1]: Started systemd-userdbd.service. Oct 2 19:38:58.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:58.515257 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 19:38:58.541035 (udev-worker)[1551]: Network interface NamePolicy= disabled on kernel command line. Oct 2 19:38:58.689906 systemd-networkd[1549]: lo: Link UP Oct 2 19:38:58.689933 systemd-networkd[1549]: lo: Gained carrier Oct 2 19:38:58.690942 systemd-networkd[1549]: Enumeration completed Oct 2 19:38:58.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:58.691142 systemd[1]: Started systemd-networkd.service. Oct 2 19:38:58.695076 systemd-networkd[1549]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:38:58.700896 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 2 19:38:58.701924 systemd-networkd[1549]: eth0: Link UP Oct 2 19:38:58.702297 systemd-networkd[1549]: eth0: Gained carrier Oct 2 19:38:58.703923 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:38:58.735166 systemd-networkd[1549]: eth0: DHCPv4 address 172.31.27.230/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 2 19:38:58.815882 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1548) Oct 2 19:38:59.017145 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:38:59.024522 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:38:59.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:59.028674 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:38:59.077340 lvm[1661]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:38:59.115778 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:38:59.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:59.118019 systemd[1]: Reached target cryptsetup.target. Oct 2 19:38:59.122127 systemd[1]: Starting lvm2-activation.service... Oct 2 19:38:59.136685 lvm[1662]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:38:59.176858 systemd[1]: Finished lvm2-activation.service. Oct 2 19:38:59.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:59.178960 systemd[1]: Reached target local-fs-pre.target. 
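[Editor's note] The DHCPv4 lease logged above (172.31.27.230/20 with gateway 172.31.16.1, acquired from 172.31.16.1) can be sanity-checked with the standard library; a short sketch, not tied to systemd-networkd itself:

    import ipaddress

    # Values taken from the systemd-networkd DHCPv4 log line above.
    iface = ipaddress.ip_interface("172.31.27.230/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)                # 172.31.16.0/20
    print(gateway in iface.network)     # True: gateway/DHCP server is on-link
    print(iface.network.num_addresses)  # 4096 addresses in a /20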
Oct 2 19:38:59.180815 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:38:59.181048 systemd[1]: Reached target local-fs.target. Oct 2 19:38:59.182883 systemd[1]: Reached target machines.target. Oct 2 19:38:59.187099 systemd[1]: Starting ldconfig.service... Oct 2 19:38:59.189755 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:38:59.190148 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:38:59.193507 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:38:59.197797 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:38:59.204676 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:38:59.206882 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:38:59.207007 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:38:59.209335 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:38:59.244657 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1664 (bootctl) Oct 2 19:38:59.247072 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:38:59.266453 systemd-tmpfiles[1667]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:38:59.270306 systemd-tmpfiles[1667]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:38:59.275024 systemd-tmpfiles[1667]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:38:59.279502 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:38:59.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:59.374900 systemd-fsck[1673]: fsck.fat 4.2 (2021-01-31) Oct 2 19:38:59.374900 systemd-fsck[1673]: /dev/nvme0n1p1: 236 files, 113463/258078 clusters Oct 2 19:38:59.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:59.379001 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:38:59.384173 systemd[1]: Mounting boot.mount... Oct 2 19:38:59.408632 systemd[1]: Mounted boot.mount. Oct 2 19:38:59.438200 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:38:59.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:59.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:59.643878 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:38:59.648413 systemd[1]: Starting audit-rules.service... 
Oct 2 19:38:59.655708 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:38:59.663224 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:38:59.665000 audit: BPF prog-id=30 op=LOAD Oct 2 19:38:59.672000 audit: BPF prog-id=31 op=LOAD Oct 2 19:38:59.670241 systemd[1]: Starting systemd-resolved.service... Oct 2 19:38:59.678179 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:38:59.682142 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:38:59.718152 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:38:59.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:59.720347 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:38:59.743000 audit[1692]: SYSTEM_BOOT pid=1692 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:38:59.753664 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:38:59.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:59.802775 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:38:59.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:59.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:59.910391 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:38:59.912608 systemd[1]: Reached target time-set.target. Oct 2 19:38:59.938146 systemd-resolved[1690]: Positive Trust Anchors: Oct 2 19:38:59.938677 systemd-resolved[1690]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:38:59.938914 systemd-resolved[1690]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:38:59.960000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:38:59.960000 audit[1708]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd939e8e0 a2=420 a3=0 items=0 ppid=1687 pid=1708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:59.960000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:38:59.963069 augenrules[1708]: No rules Oct 2 19:38:59.964429 systemd[1]: Finished audit-rules.service. Oct 2 19:38:59.982496 systemd-resolved[1690]: Defaulting to hostname 'linux'. Oct 2 19:38:59.986748 systemd[1]: Started systemd-resolved.service. Oct 2 19:38:59.988739 systemd[1]: Reached target network.target. Oct 2 19:38:59.990489 systemd[1]: Reached target nss-lookup.target. Oct 2 19:38:59.569797 systemd-resolved[1690]: Clock change detected. Flushing caches. Oct 2 19:38:59.887278 systemd-journald[1498]: Time jumped backwards, rotating. Oct 2 19:38:59.569816 systemd-timesyncd[1691]: Contacted time server 152.70.159.102:123 (0.flatcar.pool.ntp.org). Oct 2 19:38:59.569926 systemd-timesyncd[1691]: Initial clock synchronization to Mon 2023-10-02 19:38:59.569612 UTC. Oct 2 19:38:59.750351 systemd-networkd[1549]: eth0: Gained IPv6LL Oct 2 19:38:59.755653 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:38:59.758099 systemd[1]: Reached target network-online.target. Oct 2 19:38:59.974706 ldconfig[1663]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:38:59.984940 systemd[1]: Finished ldconfig.service. Oct 2 19:38:59.990974 systemd[1]: Starting systemd-update-done.service... Oct 2 19:39:00.009811 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:39:00.013354 systemd[1]: Finished systemd-update-done.service. Oct 2 19:39:00.016123 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:39:00.018703 systemd[1]: Reached target sysinit.target. Oct 2 19:39:00.020601 systemd[1]: Started motdgen.path. Oct 2 19:39:00.022319 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:39:00.025030 systemd[1]: Started logrotate.timer. Oct 2 19:39:00.027096 systemd[1]: Started mdadm.timer. Oct 2 19:39:00.028738 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:39:00.030719 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:39:00.030787 systemd[1]: Reached target paths.target. Oct 2 19:39:00.032629 systemd[1]: Reached target timers.target. Oct 2 19:39:00.034874 systemd[1]: Listening on dbus.socket. Oct 2 19:39:00.038777 systemd[1]: Starting docker.socket... 
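[Editor's note] The sixteen 16.172 through 31.172.in-addr.arpa entries in the negative trust anchor list above are simply the /16 reverse zones that tile the RFC 1918 range 172.16.0.0/12. A one-line sketch that regenerates that part of the list:

    # Reverse zones for 172.16.0.0/12, matching the systemd-resolved list above.
    zones = [f"{octet}.172.in-addr.arpa" for octet in range(16, 32)]

    print(len(zones))            # 16
    print(zones[0], zones[-1])   # 16.172.in-addr.arpa 31.172.in-addr.arpa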
Oct 2 19:39:00.050984 systemd[1]: Listening on sshd.socket. Oct 2 19:39:00.052834 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:39:00.053729 systemd[1]: Listening on docker.socket. Oct 2 19:39:00.055546 systemd[1]: Reached target sockets.target. Oct 2 19:39:00.057292 systemd[1]: Reached target basic.target. Oct 2 19:39:00.059072 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:39:00.059167 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:39:00.061376 systemd[1]: Started amazon-ssm-agent.service. Oct 2 19:39:00.072113 systemd[1]: Starting containerd.service... Oct 2 19:39:00.076320 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Oct 2 19:39:00.083218 systemd[1]: Starting dbus.service... Oct 2 19:39:00.087104 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:39:00.092312 systemd[1]: Starting extend-filesystems.service... Oct 2 19:39:00.094047 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:39:00.097608 systemd[1]: Starting motdgen.service... Oct 2 19:39:00.102529 systemd[1]: Started nvidia.service. Oct 2 19:39:00.106696 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:39:00.111605 systemd[1]: Starting prepare-critools.service... Oct 2 19:39:00.116709 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:39:00.121931 systemd[1]: Starting sshd-keygen.service... Oct 2 19:39:00.132232 systemd[1]: Starting systemd-logind.service... Oct 2 19:39:00.133930 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:39:00.134061 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:39:00.135069 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:39:00.139964 systemd[1]: Starting update-engine.service... Oct 2 19:39:00.144698 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:39:00.172348 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:39:00.177822 jq[1726]: false Oct 2 19:39:00.172722 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:39:00.215975 jq[1736]: true Oct 2 19:39:00.232798 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:39:00.233193 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:39:00.340939 tar[1741]: crictl Oct 2 19:39:00.362324 jq[1739]: true Oct 2 19:39:00.369049 tar[1743]: ./ Oct 2 19:39:00.400192 tar[1743]: ./macvlan Oct 2 19:39:00.407484 dbus-daemon[1725]: [system] SELinux support is enabled Oct 2 19:39:00.419500 systemd[1]: Started dbus.service. Oct 2 19:39:00.424938 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:39:00.424991 systemd[1]: Reached target system-config.target. 
Oct 2 19:39:00.427130 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:39:00.427218 systemd[1]: Reached target user-config.target. Oct 2 19:39:00.489556 dbus-daemon[1725]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1549 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 2 19:39:00.496390 systemd[1]: Starting systemd-hostnamed.service... Oct 2 19:39:00.511564 extend-filesystems[1727]: Found nvme0n1 Oct 2 19:39:00.513996 amazon-ssm-agent[1722]: 2023/10/02 19:39:00 Failed to load instance info from vault. RegistrationKey does not exist. Oct 2 19:39:00.515186 extend-filesystems[1727]: Found nvme0n1p1 Oct 2 19:39:00.525359 extend-filesystems[1727]: Found nvme0n1p2 Oct 2 19:39:00.527134 extend-filesystems[1727]: Found nvme0n1p3 Oct 2 19:39:00.533206 extend-filesystems[1727]: Found usr Oct 2 19:39:00.533206 extend-filesystems[1727]: Found nvme0n1p4 Oct 2 19:39:00.533206 extend-filesystems[1727]: Found nvme0n1p6 Oct 2 19:39:00.533206 extend-filesystems[1727]: Found nvme0n1p7 Oct 2 19:39:00.533206 extend-filesystems[1727]: Found nvme0n1p9 Oct 2 19:39:00.533206 extend-filesystems[1727]: Checking size of /dev/nvme0n1p9 Oct 2 19:39:00.555184 amazon-ssm-agent[1722]: Initializing new seelog logger Oct 2 19:39:00.556189 amazon-ssm-agent[1722]: New Seelog Logger Creation Complete Oct 2 19:39:00.556351 amazon-ssm-agent[1722]: 2023/10/02 19:39:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 2 19:39:00.556351 amazon-ssm-agent[1722]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 2 19:39:00.558183 amazon-ssm-agent[1722]: 2023/10/02 19:39:00 processing appconfig overrides Oct 2 19:39:00.572998 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:39:00.573419 systemd[1]: Finished motdgen.service. Oct 2 19:39:00.655965 extend-filesystems[1727]: Resized partition /dev/nvme0n1p9 Oct 2 19:39:00.695642 extend-filesystems[1789]: resize2fs 1.46.5 (30-Dec-2021) Oct 2 19:39:00.705180 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Oct 2 19:39:00.707413 tar[1743]: ./static Oct 2 19:39:00.751447 update_engine[1735]: I1002 19:39:00.746949 1735 main.cc:92] Flatcar Update Engine starting Oct 2 19:39:00.763212 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Oct 2 19:39:00.775327 systemd[1]: Started update-engine.service. Oct 2 19:39:00.779837 update_engine[1735]: I1002 19:39:00.779789 1735 update_check_scheduler.cc:74] Next update check in 6m18s Oct 2 19:39:00.817074 systemd[1]: Started locksmithd.service. Oct 2 19:39:00.823707 extend-filesystems[1789]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Oct 2 19:39:00.823707 extend-filesystems[1789]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 2 19:39:00.823707 extend-filesystems[1789]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Oct 2 19:39:00.838373 extend-filesystems[1727]: Resized filesystem in /dev/nvme0n1p9 Oct 2 19:39:00.828064 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:39:00.828821 systemd[1]: Finished extend-filesystems.service. Oct 2 19:39:00.860657 systemd[1]: nvidia.service: Deactivated successfully. 
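[Editor's note] The extend-filesystems run above grows the root ext4 filesystem online from 553472 to 1489915 blocks of 4 KiB. A quick sketch of what those block counts mean in bytes, using only the figures from the kernel and resize2fs lines above:

    BLOCK_SIZE = 4096  # ext4 block size, reported as "(4k)" above

    old_blocks, new_blocks = 553_472, 1_489_915
    sizes = {
        "before resize": old_blocks * BLOCK_SIZE,  # 2,267,021,312 bytes (~2.1 GiB)
        "after resize":  new_blocks * BLOCK_SIZE,  # 6,102,691,840 bytes (~5.7 GiB)
    }

    for label, size in sizes.items():
        print(f"{label}: {size} bytes = {size / 2**30:.2f} GiB")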
Oct 2 19:39:00.923841 bash[1816]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:39:00.925893 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:39:01.037584 systemd-logind[1734]: Watching system buttons on /dev/input/event0 (Power Button) Oct 2 19:39:01.038028 tar[1743]: ./vlan Oct 2 19:39:01.042850 systemd-logind[1734]: New seat seat0. Oct 2 19:39:01.054298 systemd[1]: Started systemd-logind.service. Oct 2 19:39:01.072821 env[1750]: time="2023-10-02T19:39:01.072738561Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:39:01.129093 dbus-daemon[1725]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 2 19:39:01.129715 systemd[1]: Started systemd-hostnamed.service. Oct 2 19:39:01.147106 dbus-daemon[1725]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1768 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 2 19:39:01.152243 systemd[1]: Starting polkit.service... Oct 2 19:39:01.226437 polkitd[1824]: Started polkitd version 121 Oct 2 19:39:01.277585 env[1750]: time="2023-10-02T19:39:01.277341394Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:39:01.277779 polkitd[1824]: Loading rules from directory /etc/polkit-1/rules.d Oct 2 19:39:01.277925 polkitd[1824]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 2 19:39:01.280679 env[1750]: time="2023-10-02T19:39:01.280595854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:39:01.289606 polkitd[1824]: Finished loading, compiling and executing 2 rules Oct 2 19:39:01.290945 dbus-daemon[1725]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 2 19:39:01.291251 systemd[1]: Started polkit.service. Oct 2 19:39:01.294287 polkitd[1824]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 2 19:39:01.311551 env[1750]: time="2023-10-02T19:39:01.311454970Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:39:01.312057 env[1750]: time="2023-10-02T19:39:01.312008158Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:39:01.313712 env[1750]: time="2023-10-02T19:39:01.313613314Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:39:01.313967 env[1750]: time="2023-10-02T19:39:01.313921258Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 19:39:01.314137 env[1750]: time="2023-10-02T19:39:01.314094190Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:39:01.314355 env[1750]: time="2023-10-02T19:39:01.314311918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Oct 2 19:39:01.314744 env[1750]: time="2023-10-02T19:39:01.314694718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:39:01.316031 env[1750]: time="2023-10-02T19:39:01.315969226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:39:01.321891 env[1750]: time="2023-10-02T19:39:01.321821182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:39:01.325316 env[1750]: time="2023-10-02T19:39:01.325257982Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:39:01.325680 env[1750]: time="2023-10-02T19:39:01.325618042Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:39:01.326533 env[1750]: time="2023-10-02T19:39:01.326483050Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:39:01.335257 systemd-hostnamed[1768]: Hostname set to (transient) Oct 2 19:39:01.335448 systemd-resolved[1690]: System hostname changed to 'ip-172-31-27-230'. Oct 2 19:39:01.339254 env[1750]: time="2023-10-02T19:39:01.339199414Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:39:01.339546 env[1750]: time="2023-10-02T19:39:01.339514498Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:39:01.339668 env[1750]: time="2023-10-02T19:39:01.339638530Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:39:01.339861 env[1750]: time="2023-10-02T19:39:01.339811402Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:39:01.340196 env[1750]: time="2023-10-02T19:39:01.340115362Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:39:01.340424 env[1750]: time="2023-10-02T19:39:01.340381378Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:39:01.340612 env[1750]: time="2023-10-02T19:39:01.340575358Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:39:01.341345 env[1750]: time="2023-10-02T19:39:01.341275810Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:39:01.341835 env[1750]: time="2023-10-02T19:39:01.341800378Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:39:01.341987 env[1750]: time="2023-10-02T19:39:01.341943994Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:39:01.342569 env[1750]: time="2023-10-02T19:39:01.342524074Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:39:01.342888 env[1750]: time="2023-10-02T19:39:01.342852538Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Oct 2 19:39:01.348313 env[1750]: time="2023-10-02T19:39:01.348235858Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:39:01.349877 env[1750]: time="2023-10-02T19:39:01.349823566Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:39:01.352184 env[1750]: time="2023-10-02T19:39:01.352110454Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:39:01.353301 env[1750]: time="2023-10-02T19:39:01.353240326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:39:01.353962 env[1750]: time="2023-10-02T19:39:01.353910910Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:39:01.354364 env[1750]: time="2023-10-02T19:39:01.354316186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:39:01.356050 env[1750]: time="2023-10-02T19:39:01.355995742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:39:01.356888 env[1750]: time="2023-10-02T19:39:01.356832646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:39:01.357336 env[1750]: time="2023-10-02T19:39:01.357117082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:39:01.357567 tar[1743]: ./portmap Oct 2 19:39:01.357740 env[1750]: time="2023-10-02T19:39:01.357697834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:39:01.358921 env[1750]: time="2023-10-02T19:39:01.358862830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:39:01.360997 env[1750]: time="2023-10-02T19:39:01.360930082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:39:01.363274 env[1750]: time="2023-10-02T19:39:01.363213202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:39:01.365305 env[1750]: time="2023-10-02T19:39:01.365244766Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:39:01.368009 env[1750]: time="2023-10-02T19:39:01.367942822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:39:01.369966 env[1750]: time="2023-10-02T19:39:01.369901054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:39:01.373476 env[1750]: time="2023-10-02T19:39:01.373397962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:39:01.374124 env[1750]: time="2023-10-02T19:39:01.374057026Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:39:01.375453 env[1750]: time="2023-10-02T19:39:01.375380182Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:39:01.378784 env[1750]: time="2023-10-02T19:39:01.378712462Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Oct 2 19:39:01.379481 env[1750]: time="2023-10-02T19:39:01.379416706Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:39:01.379788 env[1750]: time="2023-10-02T19:39:01.379740070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 19:39:01.383260 env[1750]: time="2023-10-02T19:39:01.383098642Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:39:01.386063 env[1750]: time="2023-10-02T19:39:01.386006770Z" level=info msg="Connect containerd service" Oct 2 19:39:01.386421 env[1750]: time="2023-10-02T19:39:01.386372650Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:39:01.396465 env[1750]: time="2023-10-02T19:39:01.396388594Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:39:01.404610 env[1750]: time="2023-10-02T19:39:01.404529982Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:39:01.404751 env[1750]: time="2023-10-02T19:39:01.404687314Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 2 19:39:01.406006 env[1750]: time="2023-10-02T19:39:01.404793994Z" level=info msg="containerd successfully booted in 0.341468s" Oct 2 19:39:01.404939 systemd[1]: Started containerd.service. Oct 2 19:39:01.408402 env[1750]: time="2023-10-02T19:39:01.408297658Z" level=info msg="Start subscribing containerd event" Oct 2 19:39:01.408554 env[1750]: time="2023-10-02T19:39:01.408417358Z" level=info msg="Start recovering state" Oct 2 19:39:01.408669 env[1750]: time="2023-10-02T19:39:01.408604114Z" level=info msg="Start event monitor" Oct 2 19:39:01.408957 env[1750]: time="2023-10-02T19:39:01.408807550Z" level=info msg="Start snapshots syncer" Oct 2 19:39:01.409029 env[1750]: time="2023-10-02T19:39:01.408957130Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:39:01.409128 env[1750]: time="2023-10-02T19:39:01.408980410Z" level=info msg="Start streaming server" Oct 2 19:39:01.553539 tar[1743]: ./host-local Oct 2 19:39:01.585902 coreos-metadata[1724]: Oct 02 19:39:01.585 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 2 19:39:01.588158 coreos-metadata[1724]: Oct 02 19:39:01.588 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Oct 2 19:39:01.592270 coreos-metadata[1724]: Oct 02 19:39:01.592 INFO Fetch successful Oct 2 19:39:01.592433 coreos-metadata[1724]: Oct 02 19:39:01.592 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 2 19:39:01.595824 coreos-metadata[1724]: Oct 02 19:39:01.595 INFO Fetch successful Oct 2 19:39:01.599291 unknown[1724]: wrote ssh authorized keys file for user: core Oct 2 19:39:01.645349 update-ssh-keys[1865]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:39:01.646658 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Oct 2 19:39:01.709551 amazon-ssm-agent[1722]: 2023-10-02 19:39:01 INFO Entering SSM Agent hibernate - AccessDeniedException: User: arn:aws:sts::075585003325:assumed-role/jenkins-test/i-07bd67749a50f6f38 is not authorized to perform: ssm:UpdateInstanceInformation on resource: arn:aws:ec2:us-west-2:075585003325:instance/i-07bd67749a50f6f38 because no identity-based policy allows the ssm:UpdateInstanceInformation action Oct 2 19:39:01.709551 amazon-ssm-agent[1722]: status code: 400, request id: dd56b31b-dfe8-41bc-a4a1-6478bfc9cec7 Oct 2 19:39:01.709551 amazon-ssm-agent[1722]: 2023-10-02 19:39:01 INFO Agent is in hibernate mode. Reducing logging. Logging will be reduced to one log per backoff period Oct 2 19:39:01.730617 tar[1743]: ./vrf Oct 2 19:39:01.850713 tar[1743]: ./bridge Oct 2 19:39:01.993020 tar[1743]: ./tuning Oct 2 19:39:02.026886 systemd[1]: Finished prepare-critools.service. Oct 2 19:39:02.080519 tar[1743]: ./firewall Oct 2 19:39:02.200407 tar[1743]: ./host-device Oct 2 19:39:02.286816 tar[1743]: ./sbr Oct 2 19:39:02.342900 tar[1743]: ./loopback Oct 2 19:39:02.398393 tar[1743]: ./dhcp Oct 2 19:39:02.548975 tar[1743]: ./ptp Oct 2 19:39:02.616003 tar[1743]: ./ipvlan Oct 2 19:39:02.680478 tar[1743]: ./bandwidth Oct 2 19:39:02.771400 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:39:02.901166 locksmithd[1809]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:39:07.806032 sshd_keygen[1759]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:39:07.872209 systemd[1]: Finished sshd-keygen.service. Oct 2 19:39:07.877271 systemd[1]: Starting issuegen.service... 
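[Editor's note] The coreos-metadata-sshkeys entries further above fetch the instance's public key from the EC2 instance metadata service at the exact URLs shown in the log (the PUT to /latest/api/token in the first line is the IMDSv2 token handshake, which is omitted here). A rough, hedged Python equivalent of those two GETs; it only works from inside an EC2 instance:

    import urllib.request

    IMDS = "http://169.254.169.254"

    def fetch(path):
        """GET a metadata path, as the coreos-metadata log lines above show."""
        with urllib.request.urlopen(IMDS + path, timeout=2) as resp:
            return resp.read().decode().strip()

    # Paths copied from the coreos-metadata log lines above.
    keys_index = fetch("/2019-10-01/meta-data/public-keys")           # e.g. "0=key-name"
    first_key = fetch("/2019-10-01/meta-data/public-keys/0/openssh-key")
    print(first_key)  # the key written to /home/core/.ssh/authorized_keys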
Oct 2 19:39:07.896695 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:39:07.897092 systemd[1]: Finished issuegen.service. Oct 2 19:39:07.902029 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:39:07.926768 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:39:07.931844 systemd[1]: Started getty@tty1.service. Oct 2 19:39:07.937744 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 19:39:07.940725 systemd[1]: Reached target getty.target. Oct 2 19:39:07.942774 systemd[1]: Reached target multi-user.target. Oct 2 19:39:07.947741 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:39:07.973453 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:39:07.974088 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:39:07.976813 systemd[1]: Startup finished in 1.218s (kernel) + 12.437s (initrd) + 16.361s (userspace) = 30.017s. Oct 2 19:39:09.379878 systemd[1]: Created slice system-sshd.slice. Oct 2 19:39:09.382270 systemd[1]: Started sshd@0-172.31.27.230:22-139.178.89.65:44144.service. Oct 2 19:39:09.603956 sshd[1934]: Accepted publickey for core from 139.178.89.65 port 44144 ssh2: RSA SHA256:7JXBxnRlPbGQmmbR+r/0ht2yJ3EtkuLQ82x2+HEbSLE Oct 2 19:39:09.609310 sshd[1934]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:09.628162 systemd[1]: Created slice user-500.slice. Oct 2 19:39:09.630557 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:39:09.639298 systemd-logind[1734]: New session 1 of user core. Oct 2 19:39:09.655712 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:39:09.659732 systemd[1]: Starting user@500.service... Oct 2 19:39:09.673042 (systemd)[1937]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:09.884007 systemd[1937]: Queued start job for default target default.target. Oct 2 19:39:09.885111 systemd[1937]: Reached target paths.target. Oct 2 19:39:09.885209 systemd[1937]: Reached target sockets.target. Oct 2 19:39:09.885246 systemd[1937]: Reached target timers.target. Oct 2 19:39:09.885276 systemd[1937]: Reached target basic.target. Oct 2 19:39:09.885374 systemd[1937]: Reached target default.target. Oct 2 19:39:09.885442 systemd[1937]: Startup finished in 193ms. Oct 2 19:39:09.886462 systemd[1]: Started user@500.service. Oct 2 19:39:09.888546 systemd[1]: Started session-1.scope. Oct 2 19:39:10.044913 systemd[1]: Started sshd@1-172.31.27.230:22-139.178.89.65:44150.service. Oct 2 19:39:10.233489 sshd[1946]: Accepted publickey for core from 139.178.89.65 port 44150 ssh2: RSA SHA256:7JXBxnRlPbGQmmbR+r/0ht2yJ3EtkuLQ82x2+HEbSLE Oct 2 19:39:10.236702 sshd[1946]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:10.244670 systemd-logind[1734]: New session 2 of user core. Oct 2 19:39:10.246704 systemd[1]: Started session-2.scope. Oct 2 19:39:10.393553 sshd[1946]: pam_unix(sshd:session): session closed for user core Oct 2 19:39:10.399824 systemd-logind[1734]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:39:10.400453 systemd[1]: sshd@1-172.31.27.230:22-139.178.89.65:44150.service: Deactivated successfully. Oct 2 19:39:10.401685 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:39:10.403082 systemd-logind[1734]: Removed session 2. Oct 2 19:39:10.425008 systemd[1]: Started sshd@2-172.31.27.230:22-139.178.89.65:44166.service. 
Oct 2 19:39:10.611065 sshd[1952]: Accepted publickey for core from 139.178.89.65 port 44166 ssh2: RSA SHA256:7JXBxnRlPbGQmmbR+r/0ht2yJ3EtkuLQ82x2+HEbSLE Oct 2 19:39:10.613923 sshd[1952]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:10.624227 systemd[1]: Started session-3.scope. Oct 2 19:39:10.625114 systemd-logind[1734]: New session 3 of user core. Oct 2 19:39:10.761407 sshd[1952]: pam_unix(sshd:session): session closed for user core Oct 2 19:39:10.768462 systemd[1]: sshd@2-172.31.27.230:22-139.178.89.65:44166.service: Deactivated successfully. Oct 2 19:39:10.769657 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 19:39:10.770808 systemd-logind[1734]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:39:10.772557 systemd-logind[1734]: Removed session 3. Oct 2 19:39:10.791381 systemd[1]: Started sshd@3-172.31.27.230:22-139.178.89.65:44180.service. Oct 2 19:39:10.973382 sshd[1958]: Accepted publickey for core from 139.178.89.65 port 44180 ssh2: RSA SHA256:7JXBxnRlPbGQmmbR+r/0ht2yJ3EtkuLQ82x2+HEbSLE Oct 2 19:39:10.977479 sshd[1958]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:10.985888 systemd-logind[1734]: New session 4 of user core. Oct 2 19:39:10.986791 systemd[1]: Started session-4.scope. Oct 2 19:39:11.133941 sshd[1958]: pam_unix(sshd:session): session closed for user core Oct 2 19:39:11.140222 systemd-logind[1734]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:39:11.141385 systemd[1]: sshd@3-172.31.27.230:22-139.178.89.65:44180.service: Deactivated successfully. Oct 2 19:39:11.142699 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:39:11.144199 systemd-logind[1734]: Removed session 4. Oct 2 19:39:11.166729 systemd[1]: Started sshd@4-172.31.27.230:22-139.178.89.65:44184.service. Oct 2 19:39:11.349076 sshd[1964]: Accepted publickey for core from 139.178.89.65 port 44184 ssh2: RSA SHA256:7JXBxnRlPbGQmmbR+r/0ht2yJ3EtkuLQ82x2+HEbSLE Oct 2 19:39:11.353227 sshd[1964]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:11.362259 systemd[1]: Started session-5.scope. Oct 2 19:39:11.363488 systemd-logind[1734]: New session 5 of user core. Oct 2 19:39:11.513361 sudo[1967]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:39:11.514329 sudo[1967]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:39:11.528493 dbus-daemon[1725]: avc: received setenforce notice (enforcing=1) Oct 2 19:39:11.532080 sudo[1967]: pam_unix(sudo:session): session closed for user root Oct 2 19:39:11.556931 sshd[1964]: pam_unix(sshd:session): session closed for user core Oct 2 19:39:11.563332 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:39:11.564555 systemd[1]: sshd@4-172.31.27.230:22-139.178.89.65:44184.service: Deactivated successfully. Oct 2 19:39:11.566472 systemd-logind[1734]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:39:11.568781 systemd-logind[1734]: Removed session 5. Oct 2 19:39:11.586798 systemd[1]: Started sshd@5-172.31.27.230:22-139.178.89.65:44200.service. Oct 2 19:39:11.771247 sshd[1971]: Accepted publickey for core from 139.178.89.65 port 44200 ssh2: RSA SHA256:7JXBxnRlPbGQmmbR+r/0ht2yJ3EtkuLQ82x2+HEbSLE Oct 2 19:39:11.773753 sshd[1971]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:11.781233 systemd-logind[1734]: New session 6 of user core. Oct 2 19:39:11.782958 systemd[1]: Started session-6.scope. 
Oct 2 19:39:11.905192 sudo[1975]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:39:11.906434 sudo[1975]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:39:11.914392 sudo[1975]: pam_unix(sudo:session): session closed for user root Oct 2 19:39:11.927930 sudo[1974]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:39:11.928473 sudo[1974]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:39:11.952019 systemd[1]: Stopping audit-rules.service... Oct 2 19:39:11.956000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:39:11.959253 kernel: kauditd_printk_skb: 78 callbacks suppressed Oct 2 19:39:11.959309 kernel: audit: type=1305 audit(1696275551.956:168): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:39:11.960370 auditctl[1978]: No rules Oct 2 19:39:11.965048 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:39:11.965458 systemd[1]: Stopped audit-rules.service. Oct 2 19:39:11.956000 audit[1978]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdc97bc30 a2=420 a3=0 items=0 ppid=1 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:11.970000 systemd[1]: Starting audit-rules.service... Oct 2 19:39:11.956000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:39:11.978207 kernel: audit: type=1300 audit(1696275551.956:168): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdc97bc30 a2=420 a3=0 items=0 ppid=1 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:11.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:11.989810 kernel: audit: type=1327 audit(1696275551.956:168): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:39:11.989906 kernel: audit: type=1131 audit(1696275551.964:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:12.038826 augenrules[1995]: No rules Oct 2 19:39:12.041137 systemd[1]: Finished audit-rules.service. Oct 2 19:39:12.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:12.051468 sudo[1974]: pam_unix(sudo:session): session closed for user root Oct 2 19:39:12.050000 audit[1974]: USER_END pid=1974 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:12.061764 kernel: audit: type=1130 audit(1696275552.040:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:12.061857 kernel: audit: type=1106 audit(1696275552.050:171): pid=1974 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:12.050000 audit[1974]: CRED_DISP pid=1974 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:12.071187 kernel: audit: type=1104 audit(1696275552.050:172): pid=1974 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:12.083786 sshd[1971]: pam_unix(sshd:session): session closed for user core Oct 2 19:39:12.084000 audit[1971]: USER_END pid=1971 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:39:12.098694 systemd[1]: sshd@5-172.31.27.230:22-139.178.89.65:44200.service: Deactivated successfully. Oct 2 19:39:12.084000 audit[1971]: CRED_DISP pid=1971 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:39:12.099879 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:39:12.107970 kernel: audit: type=1106 audit(1696275552.084:173): pid=1971 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:39:12.108074 kernel: audit: type=1104 audit(1696275552.084:174): pid=1971 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:39:12.109043 systemd-logind[1734]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:39:12.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.27.230:22-139.178.89.65:44200 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:12.121200 kernel: audit: type=1131 audit(1696275552.097:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.27.230:22-139.178.89.65:44200 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:12.118295 systemd[1]: Started sshd@6-172.31.27.230:22-139.178.89.65:44214.service. Oct 2 19:39:12.122662 systemd-logind[1734]: Removed session 6. 
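[Editor's note] The kernel audit records above encode the audited command line as hex in the proctitle field, with NUL bytes separating the arguments. Decoding the value from the audit-rules restart above recovers the auditctl invocation; a small sketch:

    # PROCTITLE value copied from the audit record above
    # (emitted while audit-rules.service is stopped and restarted).
    hex_proctitle = "2F7362696E2F617564697463746C002D44"

    args = bytes.fromhex(hex_proctitle).split(b"\x00")
    print([a.decode() for a in args])   # ['/sbin/auditctl', '-D']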
Oct 2 19:39:12.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.27.230:22-139.178.89.65:44214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:12.299000 audit[2001]: USER_ACCT pid=2001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:39:12.301834 sshd[2001]: Accepted publickey for core from 139.178.89.65 port 44214 ssh2: RSA SHA256:7JXBxnRlPbGQmmbR+r/0ht2yJ3EtkuLQ82x2+HEbSLE Oct 2 19:39:12.302000 audit[2001]: CRED_ACQ pid=2001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:39:12.303000 audit[2001]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd73441b0 a2=3 a3=1 items=0 ppid=1 pid=2001 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:12.303000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:39:12.305294 sshd[2001]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:12.313654 systemd-logind[1734]: New session 7 of user core. Oct 2 19:39:12.314525 systemd[1]: Started session-7.scope. Oct 2 19:39:12.322000 audit[2001]: USER_START pid=2001 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:39:12.329000 audit[2003]: CRED_ACQ pid=2003 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:39:12.432000 audit[2004]: USER_ACCT pid=2004 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:12.434171 sudo[2004]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:39:12.433000 audit[2004]: CRED_REFR pid=2004 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:12.435242 sudo[2004]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:39:12.438000 audit[2004]: USER_START pid=2004 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.106338 systemd[1]: Reloading. 
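The PROCTITLE fields in the audit records above are hex-encoded command lines with NUL-separated arguments: 2F7362696E2F617564697463746C002D44 decodes to "/sbin/auditctl -D" (the rule flush recorded while audit-rules.service was being stopped and restarted) and 737368643A20636F7265205B707269765D decodes to "sshd: core [priv]". A small decoding sketch, as a hedged illustration only:

    # Decode the hex-encoded PROCTITLE payloads from the audit records above.
    # Arguments are NUL-separated in the raw record, so replace NULs with spaces.
    def decode_proctitle(hexstr):
        return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode()

    print(decode_proctitle("2F7362696E2F617564697463746C002D44"))   # /sbin/auditctl -D
    print(decode_proctitle("737368643A20636F7265205B707269765D"))   # sshd: core [priv]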
Oct 2 19:39:13.281568 /usr/lib/systemd/system-generators/torcx-generator[2033]: time="2023-10-02T19:39:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:39:13.284295 /usr/lib/systemd/system-generators/torcx-generator[2033]: time="2023-10-02T19:39:13Z" level=info msg="torcx already run" Oct 2 19:39:13.548727 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:39:13.548773 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:39:13.594369 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:39:13.747000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.747000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.747000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit: BPF prog-id=40 op=LOAD Oct 2 19:39:13.748000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit: BPF prog-id=41 op=LOAD Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.748000 audit: BPF prog-id=42 
op=LOAD Oct 2 19:39:13.748000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:39:13.749000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:39:13.750000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.750000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.750000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.750000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.750000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.750000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.750000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.750000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.750000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.750000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.750000 audit: BPF prog-id=43 op=LOAD Oct 2 19:39:13.750000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:39:13.750000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.750000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.750000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.750000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.750000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.750000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:39:13.750000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.750000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.750000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.750000 audit: BPF prog-id=44 op=LOAD Oct 2 19:39:13.750000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.750000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.750000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.751000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.751000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.751000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.751000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.751000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.751000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.751000 audit: BPF prog-id=45 op=LOAD Oct 2 19:39:13.751000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:39:13.751000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:39:13.753000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.753000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.753000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.753000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.753000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.753000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.753000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.753000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.753000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.754000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.754000 audit: BPF prog-id=46 op=LOAD Oct 2 19:39:13.754000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:39:13.756000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.756000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.756000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.756000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.756000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.757000 audit: BPF prog-id=47 op=LOAD Oct 2 19:39:13.757000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:39:13.760000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.760000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.760000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.760000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.760000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.760000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.760000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.760000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.760000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.760000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.760000 audit: BPF prog-id=48 op=LOAD Oct 2 19:39:13.761000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:39:13.761000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.761000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.761000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.761000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.761000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.761000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.761000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:39:13.761000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.761000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.761000 audit: BPF prog-id=49 op=LOAD Oct 2 19:39:13.761000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.761000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.761000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.761000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.761000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.761000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.761000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.761000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.761000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.761000 audit: BPF prog-id=50 op=LOAD Oct 2 19:39:13.761000 audit: BPF prog-id=36 op=UNLOAD Oct 2 19:39:13.761000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:39:13.763000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.763000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.763000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.763000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.763000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.763000 
audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.763000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.763000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.763000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.763000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.763000 audit: BPF prog-id=51 op=LOAD Oct 2 19:39:13.763000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:39:13.764000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.764000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.764000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.764000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.764000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.764000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.764000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.764000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.764000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.764000 audit: BPF prog-id=52 op=LOAD Oct 2 19:39:13.764000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.764000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.764000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.764000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.764000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.764000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.764000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.764000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.764000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.764000 audit: BPF prog-id=53 op=LOAD Oct 2 19:39:13.764000 audit: BPF prog-id=33 op=UNLOAD Oct 2 19:39:13.764000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:39:13.764000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.764000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.764000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.765000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.765000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.765000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.765000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.765000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.765000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.765000 audit: BPF prog-id=54 op=LOAD Oct 2 19:39:13.765000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.765000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.765000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.765000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.765000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.765000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.765000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.765000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.765000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.765000 audit: BPF prog-id=55 op=LOAD Oct 2 19:39:13.765000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:39:13.765000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:39:13.769000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.769000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.769000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.769000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.769000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.769000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.769000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.769000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.769000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.769000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.769000 audit: BPF prog-id=56 op=LOAD Oct 2 19:39:13.769000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:39:13.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:13.774000 audit: BPF prog-id=57 op=LOAD Oct 2 19:39:13.775000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:39:13.812456 systemd[1]: Started kubelet.service. Oct 2 19:39:13.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.847256 systemd[1]: Starting coreos-metadata.service... 
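The coreos-metadata entries recorded in the next lines fetch EC2 instance metadata from 169.254.169.254, first PUTting /latest/api/token and then GETting individual 2019-10-01/meta-data paths. A minimal sketch of that token-then-fetch pattern, assuming the standard IMDSv2 headers (X-aws-ec2-metadata-token-ttl-seconds on the PUT, X-aws-ec2-metadata-token on the GET); this illustrates what the log records and is not the coreos-metadata implementation itself:

    import urllib.request

    IMDS = "http://169.254.169.254"

    # Step 1: PUT /latest/api/token (the "Putting ..." entry below) to obtain a session token.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(req).read().decode()

    # Step 2: GET some of the same meta-data paths the log shows, presenting the token.
    for path in ("instance-id", "instance-type", "local-ipv4", "placement/availability-zone"):
        req = urllib.request.Request(
            f"{IMDS}/2019-10-01/meta-data/{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        print(path, urllib.request.urlopen(req).read().decode())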
Oct 2 19:39:13.982603 kubelet[2088]: E1002 19:39:13.982524 2088 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Oct 2 19:39:13.986584 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 19:39:13.986912 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 19:39:13.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 19:39:14.041240 coreos-metadata[2096]: Oct 02 19:39:14.041 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 2 19:39:14.042717 coreos-metadata[2096]: Oct 02 19:39:14.042 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1 Oct 2 19:39:14.043461 coreos-metadata[2096]: Oct 02 19:39:14.043 INFO Fetch successful Oct 2 19:39:14.043557 coreos-metadata[2096]: Oct 02 19:39:14.043 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1 Oct 2 19:39:14.044237 coreos-metadata[2096]: Oct 02 19:39:14.044 INFO Fetch successful Oct 2 19:39:14.044357 coreos-metadata[2096]: Oct 02 19:39:14.044 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1 Oct 2 19:39:14.045018 coreos-metadata[2096]: Oct 02 19:39:14.044 INFO Fetch successful Oct 2 19:39:14.045101 coreos-metadata[2096]: Oct 02 19:39:14.045 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1 Oct 2 19:39:14.045767 coreos-metadata[2096]: Oct 02 19:39:14.045 INFO Fetch successful Oct 2 19:39:14.045847 coreos-metadata[2096]: Oct 02 19:39:14.045 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1 Oct 2 19:39:14.046472 coreos-metadata[2096]: Oct 02 19:39:14.046 INFO Fetch successful Oct 2 19:39:14.046550 coreos-metadata[2096]: Oct 02 19:39:14.046 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1 Oct 2 19:39:14.047202 coreos-metadata[2096]: Oct 02 19:39:14.047 INFO Fetch successful Oct 2 19:39:14.047279 coreos-metadata[2096]: Oct 02 19:39:14.047 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1 Oct 2 19:39:14.047933 coreos-metadata[2096]: Oct 02 19:39:14.047 INFO Fetch successful Oct 2 19:39:14.048029 coreos-metadata[2096]: Oct 02 19:39:14.047 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1 Oct 2 19:39:14.048801 coreos-metadata[2096]: Oct 02 19:39:14.048 INFO Fetch successful Oct 2 19:39:14.070414 systemd[1]: Finished coreos-metadata.service. Oct 2 19:39:14.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:14.600864 systemd[1]: Stopped kubelet.service. Oct 2 19:39:14.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:14.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:14.646050 systemd[1]: Reloading. Oct 2 19:39:14.852620 /usr/lib/systemd/system-generators/torcx-generator[2151]: time="2023-10-02T19:39:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:39:14.852714 /usr/lib/systemd/system-generators/torcx-generator[2151]: time="2023-10-02T19:39:14Z" level=info msg="torcx already run" Oct 2 19:39:15.092616 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:39:15.092659 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:39:15.136498 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:39:15.284000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.284000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.284000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.284000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.284000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.284000 audit: BPF prog-id=58 op=LOAD Oct 2 19:39:15.284000 audit: BPF 
prog-id=40 op=UNLOAD Oct 2 19:39:15.284000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.284000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.285000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.285000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.285000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.285000 audit: BPF prog-id=59 op=LOAD Oct 2 19:39:15.285000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.285000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.285000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.285000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.285000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.285000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.285000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.285000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.285000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.285000 audit: BPF prog-id=60 op=LOAD Oct 2 19:39:15.285000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:39:15.285000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:39:15.286000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.286000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.286000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.286000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.286000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.286000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.286000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.286000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.286000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.287000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.287000 audit: BPF prog-id=61 op=LOAD Oct 2 19:39:15.287000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:39:15.287000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.287000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.287000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.287000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.287000 audit: BPF prog-id=62 op=LOAD Oct 2 19:39:15.287000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.287000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.287000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.287000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.287000 audit: BPF prog-id=63 op=LOAD Oct 2 19:39:15.287000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:39:15.287000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:39:15.290000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.290000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.290000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.290000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.290000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.290000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.290000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.290000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.290000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.290000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.290000 audit: BPF prog-id=64 op=LOAD Oct 2 19:39:15.290000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:39:15.292000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.292000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.293000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.293000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.293000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.293000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.293000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.293000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.293000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:39:15.293000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.293000 audit: BPF prog-id=65 op=LOAD Oct 2 19:39:15.293000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:39:15.297000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.297000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.297000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.297000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.297000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.297000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.297000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.297000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.297000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.297000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.297000 audit: BPF prog-id=66 op=LOAD Oct 2 19:39:15.297000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:39:15.297000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.297000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.297000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.297000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.297000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.297000 audit[1]: AVC 
avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.297000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.297000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.297000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.297000 audit: BPF prog-id=67 op=LOAD Oct 2 19:39:15.297000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.297000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.297000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.298000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.298000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.298000 audit: BPF prog-id=68 op=LOAD Oct 2 19:39:15.298000 audit: BPF prog-id=49 op=UNLOAD Oct 2 19:39:15.298000 audit: BPF prog-id=50 op=UNLOAD Oct 2 19:39:15.299000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.299000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.299000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.299000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.300000 audit: BPF prog-id=69 op=LOAD Oct 2 19:39:15.300000 audit: BPF prog-id=51 op=UNLOAD Oct 2 19:39:15.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.300000 audit: BPF prog-id=70 op=LOAD Oct 2 19:39:15.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.300000 audit: BPF prog-id=71 op=LOAD Oct 2 19:39:15.301000 audit: BPF prog-id=52 op=UNLOAD Oct 2 19:39:15.301000 audit: BPF prog-id=53 op=UNLOAD Oct 2 19:39:15.301000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.301000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.301000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.301000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.301000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.301000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.301000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.301000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 
2 19:39:15.301000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.301000 audit: BPF prog-id=72 op=LOAD Oct 2 19:39:15.301000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.301000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.301000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.301000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.301000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.301000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.301000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.301000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.301000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.301000 audit: BPF prog-id=73 op=LOAD Oct 2 19:39:15.301000 audit: BPF prog-id=54 op=UNLOAD Oct 2 19:39:15.302000 audit: BPF prog-id=55 op=UNLOAD Oct 2 19:39:15.305000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.305000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.305000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.305000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.305000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.305000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.305000 audit[1]: 
AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.305000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.305000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.305000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.305000 audit: BPF prog-id=74 op=LOAD Oct 2 19:39:15.306000 audit: BPF prog-id=56 op=UNLOAD Oct 2 19:39:15.311000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.311000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.311000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.311000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.311000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.311000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.311000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.311000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.311000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.311000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:15.311000 audit: BPF prog-id=75 op=LOAD Oct 2 19:39:15.311000 audit: BPF prog-id=57 op=UNLOAD Oct 2 19:39:15.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:15.344748 systemd[1]: Started kubelet.service. 
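
The run of audit records above is consistent with systemd (PID 1) re-creating its per-unit BPF programs: each BPF prog-id op=LOAD / op=UNLOAD pair is bracketed by AVC denials for the bpf (capability 39) and perfmon (capability 38) capabilities in the kernel_t domain, all with permissive=0, immediately before kubelet.service starts. A minimal Python sketch for tallying such denials from a saved copy of this journal; boot.log is a hypothetical stand-in filename, not a path from this host:

import re
from collections import Counter

# Matches AVC records of the form seen above, e.g.
#   audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 ...
AVC_RE = re.compile(
    r'AVC avc:\s+denied\s+\{ (?P<perm>\w+) \} for pid=\d+ '
    r'comm="(?P<comm>[^"]+)"(?: capability=(?P<cap>\d+))?'
)

def tally_denials(text):
    """Count denials per (comm, permission, capability) tuple in a journal dump."""
    return Counter(
        (m.group("comm"), m.group("perm"), m.group("cap") or "-")
        for m in AVC_RE.finditer(text)
    )

if __name__ == "__main__":
    with open("boot.log") as fh:                  # hypothetical saved journal
        for (comm, perm, cap), n in tally_denials(fh.read()).most_common():
            print(f"{comm:12s} {perm:10s} cap={cap:>2s} x{n}")
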
Oct 2 19:39:15.481280 kubelet[2207]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:39:15.486535 kubelet[2207]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:39:15.486535 kubelet[2207]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:39:15.486791 kubelet[2207]: I1002 19:39:15.486723 2207 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:39:15.489306 kubelet[2207]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:39:15.489306 kubelet[2207]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:39:15.489306 kubelet[2207]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:39:16.580218 kubelet[2207]: I1002 19:39:16.580124 2207 server.go:413] "Kubelet version" kubeletVersion="v1.25.10" Oct 2 19:39:16.580218 kubelet[2207]: I1002 19:39:16.580195 2207 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:39:16.580880 kubelet[2207]: I1002 19:39:16.580595 2207 server.go:825] "Client rotation is on, will bootstrap in background" Oct 2 19:39:16.586229 kubelet[2207]: I1002 19:39:16.586194 2207 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:39:16.589622 kubelet[2207]: W1002 19:39:16.589568 2207 machine.go:65] Cannot read vendor id correctly, set empty. Oct 2 19:39:16.590864 kubelet[2207]: I1002 19:39:16.590810 2207 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 19:39:16.591444 kubelet[2207]: I1002 19:39:16.591391 2207 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:39:16.591604 kubelet[2207]: I1002 19:39:16.591542 2207 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Oct 2 19:39:16.591849 kubelet[2207]: I1002 19:39:16.591800 2207 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:39:16.591849 kubelet[2207]: I1002 19:39:16.591849 2207 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true Oct 2 19:39:16.592046 kubelet[2207]: I1002 19:39:16.592019 2207 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:39:16.608169 kubelet[2207]: I1002 19:39:16.608084 2207 kubelet.go:381] "Attempting to sync node with API server" Oct 2 19:39:16.608169 kubelet[2207]: I1002 19:39:16.608164 2207 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:39:16.608429 kubelet[2207]: I1002 19:39:16.608319 2207 kubelet.go:281] "Adding apiserver pod source" Oct 2 19:39:16.608429 kubelet[2207]: I1002 19:39:16.608357 2207 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:39:16.618175 kubelet[2207]: E1002 19:39:16.618100 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:16.620363 kubelet[2207]: E1002 19:39:16.620306 2207 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:16.621749 kubelet[2207]: I1002 19:39:16.621700 2207 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:39:16.622937 kubelet[2207]: W1002 19:39:16.622898 2207 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 2 19:39:16.624535 kubelet[2207]: I1002 19:39:16.624485 2207 server.go:1175] "Started kubelet" Oct 2 19:39:16.626000 audit[2207]: AVC avc: denied { mac_admin } for pid=2207 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:16.626000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:39:16.626000 audit[2207]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b59620 a1=40000591a0 a2=4000b595f0 a3=25 items=0 ppid=1 pid=2207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.626000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:39:16.628780 kubelet[2207]: I1002 19:39:16.628748 2207 kubelet.go:1274] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:39:16.627000 audit[2207]: AVC avc: denied { mac_admin } for pid=2207 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:16.627000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:39:16.627000 audit[2207]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400029db00 a1=40000591b8 a2=4000b596b0 a3=25 items=0 ppid=1 pid=2207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.627000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:39:16.629385 kubelet[2207]: I1002 19:39:16.629356 2207 kubelet.go:1278] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:39:16.629862 kubelet[2207]: I1002 19:39:16.629840 2207 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:39:16.638900 kubelet[2207]: I1002 19:39:16.638825 2207 server.go:155] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:39:16.640384 kubelet[2207]: I1002 19:39:16.640334 2207 server.go:438] "Adding debug handlers to kubelet server" Oct 2 19:39:16.647410 kubelet[2207]: E1002 19:39:16.647232 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1a9c721fe", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 624441854, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 624441854, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:16.649596 kubelet[2207]: E1002 19:39:16.649555 2207 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:39:16.649825 kubelet[2207]: E1002 19:39:16.649794 2207 kubelet.go:1317] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:39:16.651618 kubelet[2207]: W1002 19:39:16.651564 2207 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.27.230" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:16.651991 kubelet[2207]: E1002 19:39:16.651957 2207 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.27.230" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:16.652326 kubelet[2207]: W1002 19:39:16.652286 2207 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:16.652529 kubelet[2207]: E1002 19:39:16.652501 2207 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:16.655046 kubelet[2207]: E1002 19:39:16.654895 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1ab49a06e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 649771118, time.Local), LastTimestamp:time.Date(2023, 
time.October, 2, 19, 39, 16, 649771118, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:16.655539 kubelet[2207]: I1002 19:39:16.655509 2207 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 19:39:16.660792 kubelet[2207]: I1002 19:39:16.660738 2207 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 2 19:39:16.662215 kubelet[2207]: E1002 19:39:16.662130 2207 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "172.31.27.230" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:39:16.663439 kubelet[2207]: W1002 19:39:16.663383 2207 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:16.663675 kubelet[2207]: E1002 19:39:16.663645 2207 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:16.666301 kubelet[2207]: E1002 19:39:16.666262 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:16.690000 audit[2222]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=2222 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:16.690000 audit[2222]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff27e3f90 a2=0 a3=1 items=0 ppid=2207 pid=2222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.690000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:39:16.694000 audit[2226]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2226 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:16.694000 audit[2226]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffc0c03520 a2=0 a3=1 items=0 ppid=2207 pid=2226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.694000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:39:16.719979 kubelet[2207]: I1002 19:39:16.719944 2207 cpu_manager.go:213] "Starting CPU manager" policy="none" Oct 2 19:39:16.720305 kubelet[2207]: I1002 19:39:16.720279 2207 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s" Oct 2 19:39:16.720440 kubelet[2207]: I1002 19:39:16.720419 2207 state_mem.go:36] "Initialized 
new in-memory state store" Oct 2 19:39:16.720848 kubelet[2207]: E1002 19:39:16.720719 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1af635346", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.27.230 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718564166, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718564166, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:16.722093 kubelet[2207]: E1002 19:39:16.721943 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1af63761a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.27.230 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718573082, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718573082, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:16.723327 kubelet[2207]: E1002 19:39:16.723199 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1af63a2a2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.27.230 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718584482, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718584482, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:16.725400 kubelet[2207]: I1002 19:39:16.725352 2207 policy_none.go:49] "None policy: Start" Oct 2 19:39:16.726954 kubelet[2207]: I1002 19:39:16.726902 2207 memory_manager.go:168] "Starting memorymanager" policy="None" Oct 2 19:39:16.726954 kubelet[2207]: I1002 19:39:16.726960 2207 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:39:16.701000 audit[2228]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=2228 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:16.701000 audit[2228]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffda75df50 a2=0 a3=1 items=0 ppid=2207 pid=2228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.701000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:39:16.738736 systemd[1]: Created slice kubepods.slice. Oct 2 19:39:16.739000 audit[2233]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=2233 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:16.739000 audit[2233]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd485f0b0 a2=0 a3=1 items=0 ppid=2207 pid=2233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.739000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:39:16.750279 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:39:16.757592 kubelet[2207]: E1002 19:39:16.757529 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:16.758755 systemd[1]: Created slice kubepods-besteffort.slice. 
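
The NETFILTER_CFG records above are the kubelet (ppid 2207) creating its KUBE-IPTABLES-HINT and KUBE-FIREWALL chains through /usr/sbin/xtables-nft-multi; the accompanying PROCTITLE values are the full command lines, hex-encoded in the audit record with NUL bytes separating the arguments. A small sketch to recover one of them, with the hex string copied from the pid 2222 record above:

def decode_proctitle(hex_str: str) -> list[str]:
    """Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes."""
    return bytes.fromhex(hex_str).decode("utf-8", errors="replace").split("\x00")

# proctitle= value from the first NETFILTER_CFG record above (iptables, pid 2222).
record = ("69707461626C6573002D770035002D5700313030303030002D4E00"
          "4B5542452D49505441424C45532D48494E54002D74006D616E676C65")
print(decode_proctitle(record))
# ['iptables', '-w', '5', '-W', '100000', '-N', 'KUBE-IPTABLES-HINT', '-t', 'mangle']

Applied to the pid 2228 and 2233 records, the same decoding yields the -I OUTPUT -t filter -j KUBE-FIREWALL and -I INPUT -t filter -j KUBE-FIREWALL insertions.
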
Oct 2 19:39:16.762013 kubelet[2207]: I1002 19:39:16.761355 2207 kubelet_node_status.go:70] "Attempting to register node" node="172.31.27.230" Oct 2 19:39:16.763212 kubelet[2207]: E1002 19:39:16.763100 2207 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.27.230" Oct 2 19:39:16.765381 kubelet[2207]: E1002 19:39:16.765257 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1af635346", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.27.230 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718564166, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 761287275, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.230.178a61a1af635346" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:16.767647 kubelet[2207]: E1002 19:39:16.767356 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1af63761a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.27.230 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718573082, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 761299011, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.230.178a61a1af63761a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:16.771764 kubelet[2207]: E1002 19:39:16.771550 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1af63a2a2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.27.230 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718584482, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 761303775, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.230.178a61a1af63a2a2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:16.774666 kubelet[2207]: E1002 19:39:16.774531 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1b29a9ec7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 772519623, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 772519623, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
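
The repeated Server rejected event and cannot list resource errors above appear to share one cause: client rotation is still bootstrapping in the background (see the Client rotation is on record earlier), so at this point the kubelet is reaching the API server as system:anonymous and RBAC refuses its event posts, node registration, and informer lists; such errors typically stop once TLS bootstrap completes and a signed client certificate is in place. A small sketch for pulling the rejected event reasons and messages out of a saved copy of this journal; boot.log remains a hypothetical filename:

import re

# Extracts Reason/Message pairs from "Server rejected event" lines like those above.
EVENT_RE = re.compile(
    r"Server rejected event '.*?"
    r'Reason:"(?P<reason>[^"]+)".*?Message:"(?P<message>[^"]+)"',
    re.DOTALL,
)

def rejected_events(text):
    """Return (reason, message) for every event the API server refused."""
    return [(m.group("reason"), m.group("message")) for m in EVENT_RE.finditer(text)]

if __name__ == "__main__":
    with open("boot.log") as fh:                  # hypothetical saved journal
        for reason, message in rejected_events(fh.read()):
            print(f"{reason:25s} {message}")
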
Oct 2 19:39:16.776975 kubelet[2207]: I1002 19:39:16.776939 2207 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:39:16.776000 audit[2207]: AVC avc: denied { mac_admin } for pid=2207 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:16.776000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:39:16.776000 audit[2207]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000edcc90 a1=4000c795d8 a2=4000edcc60 a3=25 items=0 ppid=1 pid=2207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.776000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:39:16.777803 kubelet[2207]: I1002 19:39:16.777772 2207 server.go:86] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:39:16.778299 kubelet[2207]: I1002 19:39:16.778273 2207 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:39:16.780277 kubelet[2207]: E1002 19:39:16.780225 2207 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.27.230\" not found" Oct 2 19:39:16.821000 audit[2240]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2240 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:16.821000 audit[2240]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffe55d1f00 a2=0 a3=1 items=0 ppid=2207 pid=2240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.821000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:39:16.825000 audit[2241]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=2241 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:16.825000 audit[2241]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffde947650 a2=0 a3=1 items=0 ppid=2207 pid=2241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.825000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:39:16.841000 audit[2244]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=2244 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:16.841000 audit[2244]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffd7515570 a2=0 a3=1 items=0 ppid=2207 pid=2244 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.841000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:39:16.858156 kubelet[2207]: E1002 19:39:16.858095 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:16.858000 audit[2247]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2247 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:16.858000 audit[2247]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffc94e8320 a2=0 a3=1 items=0 ppid=2207 pid=2247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.858000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:39:16.862000 audit[2248]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=2248 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:16.862000 audit[2248]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff1d4f9d0 a2=0 a3=1 items=0 ppid=2207 pid=2248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.862000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:39:16.864543 kubelet[2207]: E1002 19:39:16.864322 2207 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "172.31.27.230" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:39:16.867000 audit[2249]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=2249 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:16.867000 audit[2249]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff1707ee0 a2=0 a3=1 items=0 ppid=2207 pid=2249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.867000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:39:16.876000 audit[2251]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=2251 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:16.876000 audit[2251]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffe870daa0 a2=0 a3=1 items=0 ppid=2207 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.876000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:39:16.884000 audit[2253]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2253 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:16.884000 audit[2253]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc587b7d0 a2=0 a3=1 items=0 ppid=2207 pid=2253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.884000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:39:16.925000 audit[2256]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2256 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:16.925000 audit[2256]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffebcc6370 a2=0 a3=1 items=0 ppid=2207 pid=2256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.925000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:39:16.935000 audit[2258]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=2258 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:16.935000 audit[2258]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=fffff1259220 a2=0 a3=1 items=0 ppid=2207 pid=2258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.935000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:39:16.959000 kubelet[2207]: E1002 19:39:16.958942 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:16.958000 audit[2261]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=2261 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:16.961603 kernel: kauditd_printk_skb: 483 callbacks suppressed Oct 2 19:39:16.961729 kernel: audit: type=1325 audit(1696275556.958:620): table=nat:16 family=2 entries=1 op=nft_register_rule pid=2261 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:16.965257 kubelet[2207]: I1002 19:39:16.965215 2207 kubelet_node_status.go:70] "Attempting to register node" node="172.31.27.230" Oct 2 19:39:16.958000 audit[2261]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=ffffdec9be60 a2=0 a3=1 items=0 ppid=2207 pid=2261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.967790 kubelet[2207]: E1002 19:39:16.967748 2207 
kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.27.230" Oct 2 19:39:16.968060 kubelet[2207]: I1002 19:39:16.968012 2207 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Oct 2 19:39:16.978992 kernel: audit: type=1300 audit(1696275556.958:620): arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=ffffdec9be60 a2=0 a3=1 items=0 ppid=2207 pid=2261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.979117 kernel: audit: type=1327 audit(1696275556.958:620): proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:39:16.958000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:39:16.979660 kubelet[2207]: E1002 19:39:16.979534 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1af635346", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.27.230 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718564166, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 965135260, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.230.178a61a1af635346" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:16.991654 kubelet[2207]: E1002 19:39:16.991521 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1af63761a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.27.230 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718573082, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 965172004, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.230.178a61a1af63761a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:16.971000 audit[2262]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=2262 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:16.998925 kernel: audit: type=1325 audit(1696275556.971:621): table=mangle:17 family=10 entries=2 op=nft_register_chain pid=2262 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:16.971000 audit[2262]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff03cf040 a2=0 a3=1 items=0 ppid=2207 pid=2262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:17.010700 kernel: audit: type=1300 audit(1696275556.971:621): arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff03cf040 a2=0 a3=1 items=0 ppid=2207 pid=2262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.971000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:39:17.018980 kernel: audit: type=1327 audit(1696275556.971:621): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:39:16.977000 audit[2263]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=2263 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:17.025388 kernel: audit: type=1325 audit(1696275556.977:622): table=mangle:18 family=2 entries=1 op=nft_register_chain pid=2263 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:17.025500 kernel: audit: type=1300 audit(1696275556.977:622): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc6ea4fa0 a2=0 a3=1 items=0 ppid=2207 pid=2263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.977000 audit[2263]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc6ea4fa0 a2=0 a3=1 items=0 ppid=2207 pid=2263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.977000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:39:17.037668 kubelet[2207]: E1002 19:39:17.037524 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1af63a2a2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.27.230 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718584482, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 965177476, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.230.178a61a1af63a2a2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:17.043305 kernel: audit: type=1327 audit(1696275556.977:622): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:39:17.043492 kernel: audit: type=1325 audit(1696275556.983:623): table=nat:19 family=10 entries=2 op=nft_register_chain pid=2264 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:16.983000 audit[2264]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=2264 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:16.983000 audit[2264]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd2e8fa00 a2=0 a3=1 items=0 ppid=2207 pid=2264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.983000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:39:16.989000 audit[2265]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=2265 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:16.989000 audit[2265]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffee56f510 a2=0 a3=1 items=0 ppid=2207 pid=2265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:16.989000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:39:17.013000 audit[2267]: NETFILTER_CFG table=nat:21 family=10 entries=1 op=nft_register_rule pid=2267 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:17.013000 audit[2267]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffd8f88900 a2=0 a3=1 items=0 ppid=2207 pid=2267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:17.013000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:39:17.018000 audit[2269]: NETFILTER_CFG table=filter:22 family=10 entries=2 op=nft_register_chain pid=2269 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:17.018000 audit[2269]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffc8720540 a2=0 a3=1 items=0 ppid=2207 pid=2269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:17.018000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:39:17.018000 audit[2268]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_chain pid=2268 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:17.018000 audit[2268]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffde2c3f10 a2=0 a3=1 items=0 ppid=2207 pid=2268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:17.018000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:39:17.029000 audit[2271]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=2271 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:17.029000 audit[2271]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=fffff5fdb8e0 a2=0 a3=1 items=0 ppid=2207 pid=2271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:17.029000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:39:17.034000 audit[2272]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=2272 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:17.034000 audit[2272]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc5697320 a2=0 a3=1 items=0 ppid=2207 pid=2272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:17.034000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:39:17.039000 audit[2273]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=2273 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:17.039000 audit[2273]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcf857530 a2=0 a3=1 items=0 ppid=2207 pid=2273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:17.039000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:39:17.053000 audit[2275]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=2275 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:17.053000 audit[2275]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffcefe5770 a2=0 a3=1 items=0 ppid=2207 pid=2275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:17.053000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:39:17.059604 kubelet[2207]: E1002 19:39:17.059558 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:17.062000 audit[2277]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=2277 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:17.062000 audit[2277]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe87b4ec0 a2=0 a3=1 items=0 ppid=2207 pid=2277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:17.062000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:39:17.072000 audit[2279]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=2279 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:17.072000 audit[2279]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffc77751c0 a2=0 a3=1 items=0 ppid=2207 pid=2279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:17.072000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:39:17.083000 audit[2281]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=2281 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:17.083000 audit[2281]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=fffff40fc270 a2=0 a3=1 items=0 ppid=2207 pid=2281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:17.083000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:39:17.096000 audit[2283]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=2283 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:17.096000 audit[2283]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=ffffe8a57810 a2=0 a3=1 items=0 ppid=2207 pid=2283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:17.096000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:39:17.102498 kubelet[2207]: I1002 19:39:17.102459 2207 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Oct 2 19:39:17.102816 kubelet[2207]: I1002 19:39:17.102780 2207 status_manager.go:161] "Starting to sync pod status with apiserver" Oct 2 19:39:17.103046 kubelet[2207]: I1002 19:39:17.103011 2207 kubelet.go:2010] "Starting kubelet main sync loop" Oct 2 19:39:17.103315 kubelet[2207]: E1002 19:39:17.103287 2207 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:39:17.105861 kubelet[2207]: W1002 19:39:17.105824 2207 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:17.106095 kubelet[2207]: E1002 19:39:17.106073 2207 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:17.105000 audit[2284]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=2284 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:17.105000 audit[2284]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff58943e0 a2=0 a3=1 items=0 ppid=2207 pid=2284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:17.105000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:39:17.109000 audit[2285]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=2285 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:17.109000 audit[2285]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff9598fd0 a2=0 a3=1 items=0 ppid=2207 pid=2285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:17.109000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:39:17.113000 audit[2286]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2286 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:17.113000 audit[2286]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc2b52c10 a2=0 a3=1 items=0 ppid=2207 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:17.113000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:39:17.160134 kubelet[2207]: E1002 19:39:17.160070 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:17.261190 kubelet[2207]: E1002 19:39:17.261121 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:17.266698 kubelet[2207]: E1002 19:39:17.266650 2207 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: 
leases.coordination.k8s.io "172.31.27.230" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:39:17.362475 kubelet[2207]: E1002 19:39:17.362342 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:17.369899 kubelet[2207]: I1002 19:39:17.369844 2207 kubelet_node_status.go:70] "Attempting to register node" node="172.31.27.230" Oct 2 19:39:17.371764 kubelet[2207]: E1002 19:39:17.371707 2207 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.27.230" Oct 2 19:39:17.371987 kubelet[2207]: E1002 19:39:17.371674 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1af635346", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.27.230 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718564166, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 17, 369789518, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.230.178a61a1af635346" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:17.428461 kubelet[2207]: E1002 19:39:17.428253 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1af63761a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.27.230 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718573082, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 17, 369798338, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.230.178a61a1af63761a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:17.463716 kubelet[2207]: E1002 19:39:17.463597 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:17.564606 kubelet[2207]: E1002 19:39:17.564506 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:17.620811 kubelet[2207]: E1002 19:39:17.620606 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:17.627333 kubelet[2207]: E1002 19:39:17.627132 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1af63a2a2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.27.230 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718584482, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 17, 369805430, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.230.178a61a1af63a2a2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:17.659964 kubelet[2207]: W1002 19:39:17.659925 2207 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:17.660262 kubelet[2207]: E1002 19:39:17.660237 2207 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:17.665265 kubelet[2207]: E1002 19:39:17.665209 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:17.748473 kubelet[2207]: W1002 19:39:17.748415 2207 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.27.230" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:17.748621 kubelet[2207]: E1002 19:39:17.748492 2207 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.27.230" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:17.765834 kubelet[2207]: E1002 19:39:17.765785 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:17.866816 kubelet[2207]: E1002 19:39:17.866768 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:17.967689 kubelet[2207]: E1002 19:39:17.967557 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:18.068538 kubelet[2207]: E1002 19:39:18.068480 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:18.069006 kubelet[2207]: E1002 19:39:18.068960 2207 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "172.31.27.230" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:39:18.091924 kubelet[2207]: W1002 19:39:18.091891 2207 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:18.092091 kubelet[2207]: E1002 19:39:18.092070 2207 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:18.155691 kubelet[2207]: W1002 19:39:18.155628 2207 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:18.155883 kubelet[2207]: E1002 19:39:18.155707 2207 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:18.169045 kubelet[2207]: 
E1002 19:39:18.168982 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:18.173475 kubelet[2207]: I1002 19:39:18.173429 2207 kubelet_node_status.go:70] "Attempting to register node" node="172.31.27.230" Oct 2 19:39:18.175126 kubelet[2207]: E1002 19:39:18.175065 2207 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.27.230" Oct 2 19:39:18.175409 kubelet[2207]: E1002 19:39:18.175056 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1af635346", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.27.230 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718564166, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 18, 173301974, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.230.178a61a1af635346" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:18.177273 kubelet[2207]: E1002 19:39:18.177119 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1af63761a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.27.230 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718573082, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 18, 173342402, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.230.178a61a1af63761a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:18.227763 kubelet[2207]: E1002 19:39:18.227516 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1af63a2a2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.27.230 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718584482, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 18, 173347790, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.230.178a61a1af63a2a2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:18.270188 kubelet[2207]: E1002 19:39:18.270101 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:18.370839 kubelet[2207]: E1002 19:39:18.370790 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:18.471579 kubelet[2207]: E1002 19:39:18.471528 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:18.572400 kubelet[2207]: E1002 19:39:18.572251 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:18.621292 kubelet[2207]: E1002 19:39:18.621243 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:18.673328 kubelet[2207]: E1002 19:39:18.673277 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:18.773987 kubelet[2207]: E1002 19:39:18.773919 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:18.874905 kubelet[2207]: E1002 19:39:18.874749 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:18.975442 kubelet[2207]: E1002 19:39:18.975371 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:19.076001 kubelet[2207]: E1002 19:39:19.075945 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:19.176914 kubelet[2207]: E1002 19:39:19.176764 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:19.277451 kubelet[2207]: E1002 19:39:19.277363 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:19.338162 kubelet[2207]: W1002 19:39:19.338062 2207 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 
19:39:19.338162 kubelet[2207]: E1002 19:39:19.338131 2207 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:19.377787 kubelet[2207]: E1002 19:39:19.377714 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:19.478529 kubelet[2207]: E1002 19:39:19.478356 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:19.578928 kubelet[2207]: E1002 19:39:19.578840 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:19.622793 kubelet[2207]: E1002 19:39:19.622727 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:19.671264 kubelet[2207]: E1002 19:39:19.671205 2207 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "172.31.27.230" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:39:19.679564 kubelet[2207]: E1002 19:39:19.679502 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:19.777773 kubelet[2207]: I1002 19:39:19.777631 2207 kubelet_node_status.go:70] "Attempting to register node" node="172.31.27.230" Oct 2 19:39:19.779045 kubelet[2207]: E1002 19:39:19.778865 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1af635346", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.27.230 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718564166, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 19, 777010866, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.230.178a61a1af635346" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:19.779841 kubelet[2207]: E1002 19:39:19.779775 2207 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.27.230" Oct 2 19:39:19.779998 kubelet[2207]: E1002 19:39:19.779876 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:19.780586 kubelet[2207]: E1002 19:39:19.780426 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1af63761a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.27.230 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718573082, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 19, 777042126, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.230.178a61a1af63761a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:19.781964 kubelet[2207]: E1002 19:39:19.781787 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1af63a2a2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.27.230 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718584482, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 19, 777048798, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.230.178a61a1af63a2a2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:19.880755 kubelet[2207]: E1002 19:39:19.880677 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:19.903551 kubelet[2207]: W1002 19:39:19.903505 2207 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:19.903806 kubelet[2207]: E1002 19:39:19.903780 2207 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:19.949488 kubelet[2207]: W1002 19:39:19.949417 2207 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:19.949488 kubelet[2207]: E1002 19:39:19.949480 2207 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:19.981921 kubelet[2207]: E1002 19:39:19.981856 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:20.082722 kubelet[2207]: E1002 19:39:20.082575 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:20.183634 kubelet[2207]: E1002 19:39:20.183561 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:20.284530 kubelet[2207]: E1002 19:39:20.284456 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:20.371072 kubelet[2207]: W1002 19:39:20.370938 2207 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.27.230" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:20.371355 kubelet[2207]: E1002 19:39:20.371330 2207 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.27.230" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:20.384739 kubelet[2207]: E1002 19:39:20.384690 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:20.485660 kubelet[2207]: E1002 19:39:20.485611 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:20.586940 kubelet[2207]: E1002 19:39:20.586890 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:20.623785 kubelet[2207]: E1002 19:39:20.623643 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:20.687912 kubelet[2207]: E1002 19:39:20.687797 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:20.788996 kubelet[2207]: E1002 19:39:20.788931 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:20.889996 kubelet[2207]: E1002 19:39:20.889831 2207 kubelet.go:2448] "Error getting node" err="node 
\"172.31.27.230\" not found" Oct 2 19:39:20.990780 kubelet[2207]: E1002 19:39:20.990696 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:21.091986 kubelet[2207]: E1002 19:39:21.091914 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:21.193064 kubelet[2207]: E1002 19:39:21.192900 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:21.293870 kubelet[2207]: E1002 19:39:21.293788 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:21.394823 kubelet[2207]: E1002 19:39:21.394752 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:21.495793 kubelet[2207]: E1002 19:39:21.495662 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:21.596719 kubelet[2207]: E1002 19:39:21.596664 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:21.625333 kubelet[2207]: E1002 19:39:21.625263 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:21.697642 kubelet[2207]: E1002 19:39:21.697577 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:21.780382 kubelet[2207]: E1002 19:39:21.780246 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:21.798771 kubelet[2207]: E1002 19:39:21.798701 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:21.899693 kubelet[2207]: E1002 19:39:21.899630 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:22.000536 kubelet[2207]: E1002 19:39:22.000467 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:22.101836 kubelet[2207]: E1002 19:39:22.101655 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:22.202687 kubelet[2207]: E1002 19:39:22.202621 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:22.303470 kubelet[2207]: E1002 19:39:22.303408 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:22.404436 kubelet[2207]: E1002 19:39:22.404271 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:22.505184 kubelet[2207]: E1002 19:39:22.505102 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:22.606352 kubelet[2207]: E1002 19:39:22.606299 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:22.626003 kubelet[2207]: E1002 19:39:22.625927 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:22.707465 kubelet[2207]: E1002 19:39:22.707326 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:22.808645 kubelet[2207]: E1002 19:39:22.808593 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:22.873496 kubelet[2207]: E1002 19:39:22.873443 2207 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "172.31.27.230" is forbidden: User "system:anonymous" cannot get 
resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:39:22.909949 kubelet[2207]: E1002 19:39:22.909874 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:22.981419 kubelet[2207]: I1002 19:39:22.981277 2207 kubelet_node_status.go:70] "Attempting to register node" node="172.31.27.230" Oct 2 19:39:22.983227 kubelet[2207]: E1002 19:39:22.983137 2207 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.27.230" Oct 2 19:39:22.983605 kubelet[2207]: E1002 19:39:22.983454 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1af635346", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.27.230 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718564166, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 22, 981230818, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.230.178a61a1af635346" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:22.985304 kubelet[2207]: E1002 19:39:22.985112 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1af63761a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.27.230 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718573082, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 22, 981239014, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.230.178a61a1af63761a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:22.986990 kubelet[2207]: E1002 19:39:22.986851 2207 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.230.178a61a1af63a2a2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.230", UID:"172.31.27.230", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.27.230 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.230"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 16, 718584482, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 22, 981243730, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.230.178a61a1af63a2a2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:23.010400 kubelet[2207]: E1002 19:39:23.010349 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:23.111495 kubelet[2207]: E1002 19:39:23.111445 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:23.212468 kubelet[2207]: E1002 19:39:23.212420 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:23.313625 kubelet[2207]: E1002 19:39:23.313492 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:23.414673 kubelet[2207]: E1002 19:39:23.414619 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:23.515531 kubelet[2207]: E1002 19:39:23.515459 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:23.616725 kubelet[2207]: E1002 19:39:23.616576 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:23.626901 kubelet[2207]: E1002 19:39:23.626854 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:23.717389 kubelet[2207]: E1002 19:39:23.717323 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:23.818470 kubelet[2207]: E1002 19:39:23.818395 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:23.919486 kubelet[2207]: E1002 19:39:23.919324 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:23.992423 kubelet[2207]: W1002 19:39:23.992358 2207 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:23.992423 kubelet[2207]: E1002 19:39:23.992418 2207 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:24.019891 kubelet[2207]: E1002 19:39:24.019836 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:24.120259 kubelet[2207]: E1002 19:39:24.120196 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:24.220759 kubelet[2207]: E1002 19:39:24.220706 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:24.321670 kubelet[2207]: E1002 19:39:24.321620 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:24.422538 kubelet[2207]: E1002 19:39:24.422465 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:24.523810 kubelet[2207]: E1002 19:39:24.523258 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:24.623834 kubelet[2207]: E1002 19:39:24.623761 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:24.628127 kubelet[2207]: E1002 19:39:24.628076 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:24.724529 kubelet[2207]: E1002 19:39:24.724479 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:24.826200 kubelet[2207]: E1002 19:39:24.825587 2207 
kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:24.872618 kubelet[2207]: W1002 19:39:24.872574 2207 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:24.872829 kubelet[2207]: E1002 19:39:24.872806 2207 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:24.922891 kubelet[2207]: W1002 19:39:24.922820 2207 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:24.922891 kubelet[2207]: E1002 19:39:24.922886 2207 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:24.927055 kubelet[2207]: E1002 19:39:24.927001 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:25.027885 kubelet[2207]: E1002 19:39:25.027835 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:25.129323 kubelet[2207]: E1002 19:39:25.128631 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:25.230389 kubelet[2207]: E1002 19:39:25.230318 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:25.331076 kubelet[2207]: E1002 19:39:25.331016 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:25.432203 kubelet[2207]: E1002 19:39:25.431676 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:25.532026 kubelet[2207]: E1002 19:39:25.531957 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:25.628967 kubelet[2207]: E1002 19:39:25.628903 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:25.632160 kubelet[2207]: E1002 19:39:25.632103 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:25.732524 kubelet[2207]: E1002 19:39:25.732459 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:25.833454 kubelet[2207]: E1002 19:39:25.833396 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:25.934107 kubelet[2207]: E1002 19:39:25.934050 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:26.035254 kubelet[2207]: E1002 19:39:26.034713 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:26.135526 kubelet[2207]: E1002 19:39:26.135456 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:26.236376 kubelet[2207]: E1002 19:39:26.236307 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 
2 19:39:26.337634 kubelet[2207]: E1002 19:39:26.337129 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:26.437928 kubelet[2207]: E1002 19:39:26.437867 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:26.538601 kubelet[2207]: E1002 19:39:26.538536 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:26.579693 kubelet[2207]: W1002 19:39:26.579629 2207 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.27.230" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:26.579693 kubelet[2207]: E1002 19:39:26.579690 2207 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.27.230" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:26.583700 kubelet[2207]: I1002 19:39:26.583646 2207 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:39:26.630042 kubelet[2207]: E1002 19:39:26.629454 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:26.638706 kubelet[2207]: E1002 19:39:26.638658 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:26.739333 kubelet[2207]: E1002 19:39:26.739275 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:26.780571 kubelet[2207]: E1002 19:39:26.780496 2207 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.27.230\" not found" Oct 2 19:39:26.782048 kubelet[2207]: E1002 19:39:26.781998 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:26.839587 kubelet[2207]: E1002 19:39:26.839538 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:26.940429 kubelet[2207]: E1002 19:39:26.940332 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:27.023330 kubelet[2207]: E1002 19:39:27.023271 2207 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.27.230" not found Oct 2 19:39:27.041250 kubelet[2207]: E1002 19:39:27.041189 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:27.141960 kubelet[2207]: E1002 19:39:27.141907 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:27.243979 kubelet[2207]: E1002 19:39:27.243396 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:27.344271 kubelet[2207]: E1002 19:39:27.344208 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:27.445457 kubelet[2207]: E1002 19:39:27.445405 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:27.546735 kubelet[2207]: E1002 19:39:27.546221 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:27.630766 kubelet[2207]: E1002 19:39:27.630664 2207 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:27.647170 kubelet[2207]: E1002 19:39:27.647052 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:27.748003 kubelet[2207]: E1002 19:39:27.747938 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:27.849043 kubelet[2207]: E1002 19:39:27.848484 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:27.949296 kubelet[2207]: E1002 19:39:27.949234 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:28.050275 kubelet[2207]: E1002 19:39:28.050211 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:28.150864 kubelet[2207]: E1002 19:39:28.150358 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:28.181973 kubelet[2207]: E1002 19:39:28.181920 2207 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.27.230" not found Oct 2 19:39:28.251453 kubelet[2207]: E1002 19:39:28.251385 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:28.352528 kubelet[2207]: E1002 19:39:28.352466 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:28.453351 kubelet[2207]: E1002 19:39:28.453287 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:28.554225 kubelet[2207]: E1002 19:39:28.554133 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:28.631734 kubelet[2207]: E1002 19:39:28.631682 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:28.655353 kubelet[2207]: E1002 19:39:28.655306 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:28.757016 kubelet[2207]: E1002 19:39:28.756411 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:28.858293 kubelet[2207]: E1002 19:39:28.858213 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:28.958778 kubelet[2207]: E1002 19:39:28.958699 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:29.059698 kubelet[2207]: E1002 19:39:29.059197 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:29.160344 kubelet[2207]: E1002 19:39:29.160268 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:29.260957 kubelet[2207]: E1002 19:39:29.260884 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:29.281048 kubelet[2207]: E1002 19:39:29.280970 2207 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.27.230\" not found" node="172.31.27.230" Oct 2 19:39:29.362029 kubelet[2207]: E1002 19:39:29.361503 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:29.385244 kubelet[2207]: I1002 19:39:29.385099 2207 kubelet_node_status.go:70] "Attempting to register node" node="172.31.27.230" Oct 2 19:39:29.462658 kubelet[2207]: E1002 19:39:29.462603 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:29.563361 
kubelet[2207]: E1002 19:39:29.563302 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:29.587352 kubelet[2207]: I1002 19:39:29.587294 2207 kubelet_node_status.go:73] "Successfully registered node" node="172.31.27.230" Oct 2 19:39:29.633654 kubelet[2207]: E1002 19:39:29.633056 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:29.663878 kubelet[2207]: E1002 19:39:29.663831 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:29.764614 kubelet[2207]: E1002 19:39:29.764559 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:29.865091 kubelet[2207]: E1002 19:39:29.865010 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:29.888605 kernel: kauditd_printk_skb: 47 callbacks suppressed Oct 2 19:39:29.888721 kernel: audit: type=1106 audit(1696275569.883:639): pid=2004 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:29.883000 audit[2004]: USER_END pid=2004 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:29.884212 sudo[2004]: pam_unix(sudo:session): session closed for user root Oct 2 19:39:29.885000 audit[2004]: CRED_DISP pid=2004 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:29.904558 kernel: audit: type=1104 audit(1696275569.885:640): pid=2004 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:29.920497 sshd[2001]: pam_unix(sshd:session): session closed for user core Oct 2 19:39:29.921000 audit[2001]: USER_END pid=2001 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:39:29.924936 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:39:29.926271 systemd[1]: sshd@6-172.31.27.230:22-139.178.89.65:44214.service: Deactivated successfully. Oct 2 19:39:29.936367 systemd-logind[1734]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:39:29.921000 audit[2001]: CRED_DISP pid=2001 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:39:29.939587 kernel: audit: type=1106 audit(1696275569.921:641): pid=2001 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:39:29.938758 systemd-logind[1734]: Removed session 7. 
Oct 2 19:39:29.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.27.230:22-139.178.89.65:44214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:29.958170 kernel: audit: type=1104 audit(1696275569.921:642): pid=2001 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:39:29.958241 kernel: audit: type=1131 audit(1696275569.922:643): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.27.230:22-139.178.89.65:44214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:29.965599 kubelet[2207]: E1002 19:39:29.965555 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:30.066453 kubelet[2207]: E1002 19:39:30.066402 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:30.167910 kubelet[2207]: E1002 19:39:30.166966 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:30.267585 kubelet[2207]: E1002 19:39:30.267537 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:30.368260 kubelet[2207]: E1002 19:39:30.368202 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:30.468694 kubelet[2207]: E1002 19:39:30.468648 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:30.569500 kubelet[2207]: E1002 19:39:30.569456 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:30.634181 kubelet[2207]: E1002 19:39:30.634092 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:30.670122 kubelet[2207]: E1002 19:39:30.670061 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:30.771598 kubelet[2207]: E1002 19:39:30.770920 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:30.871997 kubelet[2207]: E1002 19:39:30.871935 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:30.972898 kubelet[2207]: E1002 19:39:30.972837 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:31.074414 kubelet[2207]: E1002 19:39:31.073819 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:31.175060 kubelet[2207]: E1002 19:39:31.174959 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:31.276071 kubelet[2207]: E1002 19:39:31.276009 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:31.361013 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Oct 2 19:39:31.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:31.371302 kernel: audit: type=1131 audit(1696275571.360:644): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:31.376568 kubelet[2207]: E1002 19:39:31.376507 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:31.389000 audit: BPF prog-id=68 op=UNLOAD Oct 2 19:39:31.389000 audit: BPF prog-id=67 op=UNLOAD Oct 2 19:39:31.395791 kernel: audit: type=1334 audit(1696275571.389:645): prog-id=68 op=UNLOAD Oct 2 19:39:31.395955 kernel: audit: type=1334 audit(1696275571.389:646): prog-id=67 op=UNLOAD Oct 2 19:39:31.396015 kernel: audit: type=1334 audit(1696275571.389:647): prog-id=66 op=UNLOAD Oct 2 19:39:31.389000 audit: BPF prog-id=66 op=UNLOAD Oct 2 19:39:31.476723 kubelet[2207]: E1002 19:39:31.476644 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:31.577432 kubelet[2207]: E1002 19:39:31.577346 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:31.635418 kubelet[2207]: E1002 19:39:31.635257 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:31.678321 kubelet[2207]: E1002 19:39:31.678243 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:31.779274 kubelet[2207]: E1002 19:39:31.779201 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:31.783541 kubelet[2207]: E1002 19:39:31.783489 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:31.880166 kubelet[2207]: E1002 19:39:31.880096 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:31.981120 kubelet[2207]: E1002 19:39:31.981056 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:32.082025 kubelet[2207]: E1002 19:39:32.081952 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:32.182659 kubelet[2207]: E1002 19:39:32.182590 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:32.283576 kubelet[2207]: E1002 19:39:32.283425 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:32.384294 kubelet[2207]: E1002 19:39:32.384235 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:32.484974 kubelet[2207]: E1002 19:39:32.484918 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:32.585800 kubelet[2207]: E1002 19:39:32.585656 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:32.635896 kubelet[2207]: E1002 19:39:32.635813 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:32.686527 kubelet[2207]: E1002 19:39:32.686470 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:32.787132 kubelet[2207]: E1002 19:39:32.787080 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:32.888216 kubelet[2207]: E1002 19:39:32.888055 2207 kubelet.go:2448] "Error getting node" 
err="node \"172.31.27.230\" not found" Oct 2 19:39:32.988973 kubelet[2207]: E1002 19:39:32.988898 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:33.090077 kubelet[2207]: E1002 19:39:33.090014 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:33.191209 kubelet[2207]: E1002 19:39:33.191082 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:33.291897 kubelet[2207]: E1002 19:39:33.291851 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:33.393068 kubelet[2207]: E1002 19:39:33.392996 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:33.494236 kubelet[2207]: E1002 19:39:33.494050 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:33.594489 kubelet[2207]: E1002 19:39:33.594419 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:33.636925 kubelet[2207]: E1002 19:39:33.636863 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:33.694569 kubelet[2207]: E1002 19:39:33.694521 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:33.795634 kubelet[2207]: E1002 19:39:33.795489 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:33.896173 kubelet[2207]: E1002 19:39:33.896115 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:33.996915 kubelet[2207]: E1002 19:39:33.996849 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:34.097938 kubelet[2207]: E1002 19:39:34.097813 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:34.198268 kubelet[2207]: E1002 19:39:34.198221 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:34.299265 kubelet[2207]: E1002 19:39:34.299214 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:34.400380 kubelet[2207]: E1002 19:39:34.400243 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:34.501369 kubelet[2207]: E1002 19:39:34.501320 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:34.602661 kubelet[2207]: E1002 19:39:34.602612 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:34.637401 kubelet[2207]: E1002 19:39:34.637353 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:34.703773 kubelet[2207]: E1002 19:39:34.703724 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:34.804793 kubelet[2207]: E1002 19:39:34.804745 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:34.905883 kubelet[2207]: E1002 19:39:34.905834 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:35.007044 kubelet[2207]: E1002 19:39:35.006909 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:35.107468 kubelet[2207]: E1002 19:39:35.107417 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:35.208194 kubelet[2207]: E1002 
19:39:35.208109 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:35.309166 kubelet[2207]: E1002 19:39:35.309007 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:35.409914 kubelet[2207]: E1002 19:39:35.409870 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:35.511181 kubelet[2207]: E1002 19:39:35.511091 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:35.612281 kubelet[2207]: E1002 19:39:35.612108 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:35.638711 kubelet[2207]: E1002 19:39:35.638664 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:35.712613 kubelet[2207]: E1002 19:39:35.712561 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:35.813745 kubelet[2207]: E1002 19:39:35.813696 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:35.914752 kubelet[2207]: E1002 19:39:35.914598 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:36.015591 kubelet[2207]: E1002 19:39:36.015540 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:36.116899 kubelet[2207]: E1002 19:39:36.116828 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:36.217549 kubelet[2207]: E1002 19:39:36.217500 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:36.318379 kubelet[2207]: E1002 19:39:36.318327 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:36.419293 kubelet[2207]: E1002 19:39:36.419219 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:36.520284 kubelet[2207]: E1002 19:39:36.520118 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:36.608684 kubelet[2207]: E1002 19:39:36.608636 2207 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:36.620927 kubelet[2207]: E1002 19:39:36.620857 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:36.640491 kubelet[2207]: E1002 19:39:36.640440 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:36.721313 kubelet[2207]: E1002 19:39:36.721245 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:36.780823 kubelet[2207]: E1002 19:39:36.780674 2207 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.27.230\" not found" Oct 2 19:39:36.784666 kubelet[2207]: E1002 19:39:36.784602 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:36.821997 kubelet[2207]: E1002 19:39:36.821917 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:36.922836 kubelet[2207]: E1002 19:39:36.922767 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:37.023596 kubelet[2207]: E1002 19:39:37.023527 2207 
kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:37.124080 kubelet[2207]: E1002 19:39:37.123901 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:37.225179 kubelet[2207]: E1002 19:39:37.225091 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:37.325983 kubelet[2207]: E1002 19:39:37.325920 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:37.426777 kubelet[2207]: E1002 19:39:37.426631 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:37.527599 kubelet[2207]: E1002 19:39:37.527530 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:37.627973 kubelet[2207]: E1002 19:39:37.627909 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:37.642303 kubelet[2207]: E1002 19:39:37.642229 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:37.729045 kubelet[2207]: E1002 19:39:37.728973 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:37.830206 kubelet[2207]: E1002 19:39:37.830067 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:37.930949 kubelet[2207]: E1002 19:39:37.930857 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:38.031826 kubelet[2207]: E1002 19:39:38.031667 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:38.131934 kubelet[2207]: E1002 19:39:38.131864 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:38.232874 kubelet[2207]: E1002 19:39:38.232814 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:38.333926 kubelet[2207]: E1002 19:39:38.333771 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:38.434755 kubelet[2207]: E1002 19:39:38.434696 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:38.535617 kubelet[2207]: E1002 19:39:38.535547 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:38.636061 kubelet[2207]: E1002 19:39:38.635919 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:38.643312 kubelet[2207]: E1002 19:39:38.643237 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:38.737083 kubelet[2207]: E1002 19:39:38.736988 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:38.838223 kubelet[2207]: E1002 19:39:38.838134 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:38.939070 kubelet[2207]: E1002 19:39:38.939021 2207 kubelet.go:2448] "Error getting node" err="node \"172.31.27.230\" not found" Oct 2 19:39:39.040103 kubelet[2207]: I1002 19:39:39.040051 2207 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:39:39.041269 env[1750]: time="2023-10-02T19:39:39.041177498Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 2 19:39:39.042531 kubelet[2207]: I1002 19:39:39.042489 2207 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:39:39.043629 kubelet[2207]: E1002 19:39:39.043582 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:39.626371 kubelet[2207]: I1002 19:39:39.626298 2207 apiserver.go:52] "Watching apiserver" Oct 2 19:39:39.630785 kubelet[2207]: I1002 19:39:39.630724 2207 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:39:39.631002 kubelet[2207]: I1002 19:39:39.630911 2207 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:39:39.643930 kubelet[2207]: E1002 19:39:39.643886 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:39.646569 systemd[1]: Created slice kubepods-besteffort-pod0a65ebbe_53d4_406b_8c1f_73d71d041995.slice. Oct 2 19:39:39.662664 systemd[1]: Created slice kubepods-burstable-pod559c54e9_1f43_4791_8970_e50e73f5dcab.slice. Oct 2 19:39:39.727330 kubelet[2207]: I1002 19:39:39.727276 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0a65ebbe-53d4-406b-8c1f-73d71d041995-kube-proxy\") pod \"kube-proxy-dvx64\" (UID: \"0a65ebbe-53d4-406b-8c1f-73d71d041995\") " pod="kube-system/kube-proxy-dvx64" Oct 2 19:39:39.727554 kubelet[2207]: I1002 19:39:39.727354 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-cni-path\") pod \"cilium-7shww\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " pod="kube-system/cilium-7shww" Oct 2 19:39:39.727554 kubelet[2207]: I1002 19:39:39.727406 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-xtables-lock\") pod \"cilium-7shww\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " pod="kube-system/cilium-7shww" Oct 2 19:39:39.727554 kubelet[2207]: I1002 19:39:39.727468 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-etc-cni-netd\") pod \"cilium-7shww\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " pod="kube-system/cilium-7shww" Oct 2 19:39:39.727554 kubelet[2207]: I1002 19:39:39.727517 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-host-proc-sys-net\") pod \"cilium-7shww\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " pod="kube-system/cilium-7shww" Oct 2 19:39:39.727788 kubelet[2207]: I1002 19:39:39.727563 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggn8l\" (UniqueName: \"kubernetes.io/projected/559c54e9-1f43-4791-8970-e50e73f5dcab-kube-api-access-ggn8l\") pod \"cilium-7shww\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " pod="kube-system/cilium-7shww" Oct 2 19:39:39.727788 kubelet[2207]: I1002 19:39:39.727606 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/0a65ebbe-53d4-406b-8c1f-73d71d041995-xtables-lock\") pod \"kube-proxy-dvx64\" (UID: \"0a65ebbe-53d4-406b-8c1f-73d71d041995\") " pod="kube-system/kube-proxy-dvx64" Oct 2 19:39:39.727788 kubelet[2207]: I1002 19:39:39.727650 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a65ebbe-53d4-406b-8c1f-73d71d041995-lib-modules\") pod \"kube-proxy-dvx64\" (UID: \"0a65ebbe-53d4-406b-8c1f-73d71d041995\") " pod="kube-system/kube-proxy-dvx64" Oct 2 19:39:39.727788 kubelet[2207]: I1002 19:39:39.727691 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-cilium-cgroup\") pod \"cilium-7shww\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " pod="kube-system/cilium-7shww" Oct 2 19:39:39.727788 kubelet[2207]: I1002 19:39:39.727744 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-lib-modules\") pod \"cilium-7shww\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " pod="kube-system/cilium-7shww" Oct 2 19:39:39.728113 kubelet[2207]: I1002 19:39:39.727794 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/559c54e9-1f43-4791-8970-e50e73f5dcab-clustermesh-secrets\") pod \"cilium-7shww\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " pod="kube-system/cilium-7shww" Oct 2 19:39:39.728113 kubelet[2207]: I1002 19:39:39.727837 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/559c54e9-1f43-4791-8970-e50e73f5dcab-hubble-tls\") pod \"cilium-7shww\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " pod="kube-system/cilium-7shww" Oct 2 19:39:39.728113 kubelet[2207]: I1002 19:39:39.727886 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd9fc\" (UniqueName: \"kubernetes.io/projected/0a65ebbe-53d4-406b-8c1f-73d71d041995-kube-api-access-dd9fc\") pod \"kube-proxy-dvx64\" (UID: \"0a65ebbe-53d4-406b-8c1f-73d71d041995\") " pod="kube-system/kube-proxy-dvx64" Oct 2 19:39:39.728113 kubelet[2207]: I1002 19:39:39.727939 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-cilium-run\") pod \"cilium-7shww\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " pod="kube-system/cilium-7shww" Oct 2 19:39:39.728113 kubelet[2207]: I1002 19:39:39.727984 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-bpf-maps\") pod \"cilium-7shww\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " pod="kube-system/cilium-7shww" Oct 2 19:39:39.728113 kubelet[2207]: I1002 19:39:39.728045 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-hostproc\") pod \"cilium-7shww\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " pod="kube-system/cilium-7shww" Oct 2 19:39:39.728526 kubelet[2207]: I1002 19:39:39.728104 2207 
reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/559c54e9-1f43-4791-8970-e50e73f5dcab-cilium-config-path\") pod \"cilium-7shww\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " pod="kube-system/cilium-7shww" Oct 2 19:39:39.728526 kubelet[2207]: I1002 19:39:39.728201 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-host-proc-sys-kernel\") pod \"cilium-7shww\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " pod="kube-system/cilium-7shww" Oct 2 19:39:39.728526 kubelet[2207]: I1002 19:39:39.728223 2207 reconciler.go:169] "Reconciler: start to sync state" Oct 2 19:39:39.960394 env[1750]: time="2023-10-02T19:39:39.960295641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dvx64,Uid:0a65ebbe-53d4-406b-8c1f-73d71d041995,Namespace:kube-system,Attempt:0,}" Oct 2 19:39:40.274796 env[1750]: time="2023-10-02T19:39:40.274252343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7shww,Uid:559c54e9-1f43-4791-8970-e50e73f5dcab,Namespace:kube-system,Attempt:0,}" Oct 2 19:39:40.538522 env[1750]: time="2023-10-02T19:39:40.538041059Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:40.540637 env[1750]: time="2023-10-02T19:39:40.540577688Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:40.545687 env[1750]: time="2023-10-02T19:39:40.545596236Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:40.547942 env[1750]: time="2023-10-02T19:39:40.547879665Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:40.552966 env[1750]: time="2023-10-02T19:39:40.552887989Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:40.558587 env[1750]: time="2023-10-02T19:39:40.558525305Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:40.560828 env[1750]: time="2023-10-02T19:39:40.560761609Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:40.563069 env[1750]: time="2023-10-02T19:39:40.562971922Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:40.617035 env[1750]: time="2023-10-02T19:39:40.616878145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:39:40.617285 env[1750]: time="2023-10-02T19:39:40.617052157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:39:40.617285 env[1750]: time="2023-10-02T19:39:40.617182273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:39:40.618134 env[1750]: time="2023-10-02T19:39:40.617927316Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/61223c640620063cd8e2fc12b6c63d907190652e6c62bd6a8c70f2aed427378c pid=2305 runtime=io.containerd.runc.v2 Oct 2 19:39:40.622820 env[1750]: time="2023-10-02T19:39:40.622655345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:39:40.622820 env[1750]: time="2023-10-02T19:39:40.622738169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:39:40.623240 env[1750]: time="2023-10-02T19:39:40.623086720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:39:40.624037 env[1750]: time="2023-10-02T19:39:40.623898135Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2 pid=2310 runtime=io.containerd.runc.v2 Oct 2 19:39:40.649535 kubelet[2207]: E1002 19:39:40.649479 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:40.661452 systemd[1]: Started cri-containerd-61223c640620063cd8e2fc12b6c63d907190652e6c62bd6a8c70f2aed427378c.scope. Oct 2 19:39:40.676831 systemd[1]: Started cri-containerd-e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2.scope. 
Oct 2 19:39:40.727000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.727000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.745928 kernel: audit: type=1400 audit(1696275580.727:648): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.746197 kernel: audit: type=1400 audit(1696275580.727:649): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.746277 kernel: audit: type=1400 audit(1696275580.727:650): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.727000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.755944 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:39:40.756120 kernel: audit: type=1400 audit(1696275580.727:651): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.727000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.765979 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:39:40.766165 kernel: audit: type=1400 audit(1696275580.727:652): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.727000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.775175 kernel: audit: backlog limit exceeded Oct 2 19:39:40.775316 kernel: audit: type=1400 audit(1696275580.727:653): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.727000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.785186 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:39:40.727000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.727000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.727000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.735000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.735000 audit: BPF prog-id=76 op=LOAD Oct 2 19:39:40.735000 audit[2325]: AVC avc: denied { bpf } for pid=2325 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.735000 audit[2325]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001bdb38 a2=10 a3=0 items=0 ppid=2305 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:40.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631323233633634303632303036336364386532666331326236633633 Oct 2 19:39:40.735000 audit[2325]: AVC avc: denied { perfmon } for pid=2325 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.735000 audit[2325]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001bd5a0 a2=3c a3=0 items=0 ppid=2305 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:40.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631323233633634303632303036336364386532666331326236633633 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { bpf } for pid=2325 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { bpf } for pid=2325 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { bpf } for pid=2325 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { perfmon } for pid=2325 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { perfmon } for pid=2325 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { perfmon } for pid=2325 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { perfmon } for pid=2325 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { perfmon } for pid=2325 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { bpf } for pid=2325 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { bpf } for pid=2325 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit: BPF prog-id=77 op=LOAD Oct 2 19:39:40.736000 audit[2325]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bd8e0 a2=78 a3=0 items=0 ppid=2305 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:40.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631323233633634303632303036336364386532666331326236633633 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { bpf } for pid=2325 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { bpf } for pid=2325 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { perfmon } for pid=2325 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { perfmon } for pid=2325 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { perfmon } for pid=2325 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { perfmon } for pid=2325 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { perfmon } for pid=2325 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { bpf } for pid=2325 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { bpf } for pid=2325 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit: BPF prog-id=78 op=LOAD Oct 2 19:39:40.736000 audit[2325]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001bd670 a2=78 a3=0 items=0 ppid=2305 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:40.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631323233633634303632303036336364386532666331326236633633 Oct 2 19:39:40.736000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:39:40.736000 audit: BPF prog-id=77 op=UNLOAD Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { bpf } for pid=2325 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { bpf } for pid=2325 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { bpf } for pid=2325 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { perfmon } for pid=2325 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { perfmon } for pid=2325 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { perfmon } for pid=2325 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { perfmon } for pid=2325 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { perfmon } for pid=2325 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { bpf } for pid=2325 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit[2325]: AVC avc: denied { bpf } for pid=2325 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.736000 audit: BPF prog-id=79 op=LOAD Oct 2 19:39:40.736000 audit[2325]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bdb40 a2=78 a3=0 items=0 ppid=2305 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:40.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631323233633634303632303036336364386532666331326236633633 Oct 2 19:39:40.750000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.750000 audit[1]: AVC avc: denied { 
bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.750000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.750000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.750000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.750000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.782000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.790000 audit[2333]: AVC avc: denied { bpf } for pid=2333 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.790000 audit[2333]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001c5b38 a2=10 a3=0 items=0 ppid=2310 pid=2333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:40.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536343139613966333534333665303338303333393832353534336265 Oct 2 19:39:40.790000 audit[2333]: AVC avc: denied { perfmon } for pid=2333 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.790000 audit[2333]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001c55a0 a2=3c a3=0 items=0 ppid=2310 pid=2333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:40.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536343139613966333534333665303338303333393832353534336265 Oct 2 19:39:40.790000 audit[2333]: AVC avc: denied { bpf } for pid=2333 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.790000 
audit[2333]: AVC avc: denied { bpf } for pid=2333 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.790000 audit[2333]: AVC avc: denied { bpf } for pid=2333 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.790000 audit[2333]: AVC avc: denied { perfmon } for pid=2333 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.790000 audit[2333]: AVC avc: denied { perfmon } for pid=2333 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.790000 audit[2333]: AVC avc: denied { perfmon } for pid=2333 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.790000 audit[2333]: AVC avc: denied { perfmon } for pid=2333 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.790000 audit[2333]: AVC avc: denied { perfmon } for pid=2333 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.790000 audit[2333]: AVC avc: denied { bpf } for pid=2333 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.790000 audit[2333]: AVC avc: denied { bpf } for pid=2333 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.790000 audit: BPF prog-id=81 op=LOAD Oct 2 19:39:40.790000 audit[2333]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001c58e0 a2=78 a3=0 items=0 ppid=2310 pid=2333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:40.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536343139613966333534333665303338303333393832353534336265 Oct 2 19:39:40.791000 audit[2333]: AVC avc: denied { bpf } for pid=2333 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.791000 audit[2333]: AVC avc: denied { bpf } for pid=2333 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.791000 audit[2333]: AVC avc: denied { perfmon } for pid=2333 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.791000 audit[2333]: AVC avc: denied { perfmon } for pid=2333 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.791000 audit[2333]: AVC avc: denied { perfmon } for pid=2333 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.791000 audit[2333]: AVC avc: denied { perfmon } for pid=2333 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.791000 audit[2333]: AVC avc: denied { perfmon } for pid=2333 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.791000 audit[2333]: AVC avc: denied { bpf } for pid=2333 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.791000 audit[2333]: AVC avc: denied { bpf } for pid=2333 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.791000 audit: BPF prog-id=82 op=LOAD Oct 2 19:39:40.791000 audit[2333]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=40001c5670 a2=78 a3=0 items=0 ppid=2310 pid=2333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:40.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536343139613966333534333665303338303333393832353534336265 Oct 2 19:39:40.791000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:39:40.791000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:39:40.791000 audit[2333]: AVC avc: denied { bpf } for pid=2333 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.791000 audit[2333]: AVC avc: denied { bpf } for pid=2333 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.791000 audit[2333]: AVC avc: denied { bpf } for pid=2333 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.791000 audit[2333]: AVC avc: denied { perfmon } for pid=2333 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.791000 audit[2333]: AVC avc: denied { perfmon } for pid=2333 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.791000 audit[2333]: AVC avc: denied { perfmon } for pid=2333 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.791000 audit[2333]: AVC avc: denied { perfmon } for pid=2333 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.791000 audit[2333]: AVC avc: denied { perfmon } for pid=2333 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.791000 audit[2333]: AVC avc: denied { bpf } for pid=2333 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.791000 audit[2333]: AVC avc: denied { bpf } for pid=2333 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:40.791000 audit: BPF prog-id=83 op=LOAD Oct 2 19:39:40.791000 audit[2333]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001c5b40 a2=78 a3=0 items=0 ppid=2310 pid=2333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:40.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536343139613966333534333665303338303333393832353534336265 Oct 2 19:39:40.830263 env[1750]: time="2023-10-02T19:39:40.830202558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dvx64,Uid:0a65ebbe-53d4-406b-8c1f-73d71d041995,Namespace:kube-system,Attempt:0,} returns sandbox id \"61223c640620063cd8e2fc12b6c63d907190652e6c62bd6a8c70f2aed427378c\"" Oct 2 19:39:40.834759 env[1750]: time="2023-10-02T19:39:40.834705179Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\"" Oct 2 19:39:40.851744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4178092233.mount: Deactivated successfully. Oct 2 19:39:40.859765 env[1750]: time="2023-10-02T19:39:40.859702990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7shww,Uid:559c54e9-1f43-4791-8970-e50e73f5dcab,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2\"" Oct 2 19:39:41.650373 kubelet[2207]: E1002 19:39:41.650292 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:41.786039 kubelet[2207]: E1002 19:39:41.785907 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:42.244785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount569590270.mount: Deactivated successfully. 
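At this point containerd has created the pod sandboxes for kube-proxy-dvx64 and cilium-7shww and begun pulling the kube-proxy image, while the kubelet still reports the CNI plugin as uninitialized; that message normally persists until the Cilium agent has installed its CNI configuration. A small sketch for pairing each pod with the sandbox ID that the later audit records and runc proctitles refer to; the sample line is reduced to the msg portion of the kube-proxy RunPodSandbox record above:

    # Minimal sketch: pull pod name, namespace and sandbox ID out of containerd's
    # "RunPodSandbox ... returns sandbox id" lines, such as the two above.
    import re

    pat = re.compile(
        r'RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),Uid:([^,]+),'
        r'Namespace:([^,]+),.*returns sandbox id \\"([0-9a-f]+)\\"')

    # Sample reduced from the kube-proxy record above.
    line = (r'msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dvx64,'
            r'Uid:0a65ebbe-53d4-406b-8c1f-73d71d041995,Namespace:kube-system,Attempt:0,}'
            r' returns sandbox id \"61223c640620063cd8e2fc12b6c63d907190652e6c62bd6a8c70f2aed427378c\""')

    m = pat.search(line)
    if m:
        name, uid, namespace, sandbox = m.groups()
        print(f"{namespace}/{name} -> sandbox {sandbox[:12]}")  # kube-system/kube-proxy-dvx64 -> sandbox 61223c640620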
Oct 2 19:39:42.651594 kubelet[2207]: E1002 19:39:42.651419 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:42.900579 env[1750]: time="2023-10-02T19:39:42.900510394Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:42.904408 env[1750]: time="2023-10-02T19:39:42.904255013Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:36ad84e6a838b02d80a9db87b13c83185253f647e2af2f58f91ac1346103ff4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:42.907902 env[1750]: time="2023-10-02T19:39:42.907803817Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:42.911337 env[1750]: time="2023-10-02T19:39:42.911257052Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:42.912741 env[1750]: time="2023-10-02T19:39:42.912677850Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\" returns image reference \"sha256:36ad84e6a838b02d80a9db87b13c83185253f647e2af2f58f91ac1346103ff4e\"" Oct 2 19:39:42.915016 env[1750]: time="2023-10-02T19:39:42.914950227Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\"" Oct 2 19:39:42.920213 env[1750]: time="2023-10-02T19:39:42.920107472Z" level=info msg="CreateContainer within sandbox \"61223c640620063cd8e2fc12b6c63d907190652e6c62bd6a8c70f2aed427378c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:39:42.948992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount73601217.mount: Deactivated successfully. Oct 2 19:39:42.961410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3246842347.mount: Deactivated successfully. Oct 2 19:39:42.967611 env[1750]: time="2023-10-02T19:39:42.967501902Z" level=info msg="CreateContainer within sandbox \"61223c640620063cd8e2fc12b6c63d907190652e6c62bd6a8c70f2aed427378c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8d207f36f9674c42bcf099a3dc5d9e2205c01044222b7d72e4acfb93ebb58a0e\"" Oct 2 19:39:42.969248 env[1750]: time="2023-10-02T19:39:42.969177688Z" level=info msg="StartContainer for \"8d207f36f9674c42bcf099a3dc5d9e2205c01044222b7d72e4acfb93ebb58a0e\"" Oct 2 19:39:43.018944 systemd[1]: Started cri-containerd-8d207f36f9674c42bcf099a3dc5d9e2205c01044222b7d72e4acfb93ebb58a0e.scope. 
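The kube-proxy image pull has completed and container 8d207f36f967... has been created and started; the audit records that follow are runc (pid 2385) setting up BPF programs for that container, repeating the denied { bpf } / denied { perfmon } pattern seen earlier. On this aarch64 machine syscall 280 is bpf(2), and capabilities 39 and 38 are CAP_BPF and CAP_PERFMON, which matches the bpf/perfmon permission names in the AVC records. A small sketch, assuming the journal has been saved to a plain-text file (the boot.log path is an assumption, not something taken from the log), that tallies these denials by process and capability:

    # Minimal sketch: tally AVC capability denials by process name and capability
    # from a saved text copy of this journal. The boot.log path is an assumption.
    import re
    from collections import Counter

    CAP_NAMES = {38: "CAP_PERFMON", 39: "CAP_BPF"}  # the two capabilities seen in this boot
    pat = re.compile(r'comm="([^"]+)" capability=(\d+)')

    counts = Counter()
    with open("boot.log") as fh:
        for line in fh:
            if "denied" not in line or "capability=" not in line:
                continue
            m = pat.search(line)
            if m:
                cap = int(m.group(2))
                counts[(m.group(1), CAP_NAMES.get(cap, str(cap)))] += 1

    for (comm, cap), n in counts.most_common():
        print(f"{comm:10s} {cap:12s} {n}")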
Oct 2 19:39:43.074000 audit[2385]: AVC avc: denied { perfmon } for pid=2385 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.074000 audit[2385]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=400014d5a0 a2=3c a3=0 items=0 ppid=2305 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864323037663336663936373463343262636630393961336463356439 Oct 2 19:39:43.074000 audit[2385]: AVC avc: denied { bpf } for pid=2385 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.074000 audit[2385]: AVC avc: denied { bpf } for pid=2385 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.074000 audit[2385]: AVC avc: denied { bpf } for pid=2385 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.074000 audit[2385]: AVC avc: denied { perfmon } for pid=2385 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.074000 audit[2385]: AVC avc: denied { perfmon } for pid=2385 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.074000 audit[2385]: AVC avc: denied { perfmon } for pid=2385 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.074000 audit[2385]: AVC avc: denied { perfmon } for pid=2385 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.074000 audit[2385]: AVC avc: denied { perfmon } for pid=2385 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.074000 audit[2385]: AVC avc: denied { bpf } for pid=2385 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.074000 audit[2385]: AVC avc: denied { bpf } for pid=2385 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.074000 audit: BPF prog-id=84 op=LOAD Oct 2 19:39:43.074000 audit[2385]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400014d8e0 a2=78 a3=0 items=0 ppid=2305 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.074000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864323037663336663936373463343262636630393961336463356439 Oct 2 19:39:43.074000 audit[2385]: AVC avc: denied { bpf } for pid=2385 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.074000 audit[2385]: AVC avc: denied { bpf } for pid=2385 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.074000 audit[2385]: AVC avc: denied { perfmon } for pid=2385 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.074000 audit[2385]: AVC avc: denied { perfmon } for pid=2385 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.074000 audit[2385]: AVC avc: denied { perfmon } for pid=2385 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.074000 audit[2385]: AVC avc: denied { perfmon } for pid=2385 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.074000 audit[2385]: AVC avc: denied { perfmon } for pid=2385 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.074000 audit[2385]: AVC avc: denied { bpf } for pid=2385 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.074000 audit[2385]: AVC avc: denied { bpf } for pid=2385 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.074000 audit: BPF prog-id=85 op=LOAD Oct 2 19:39:43.074000 audit[2385]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400014d670 a2=78 a3=0 items=0 ppid=2305 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864323037663336663936373463343262636630393961336463356439 Oct 2 19:39:43.074000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:39:43.074000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:39:43.075000 audit[2385]: AVC avc: denied { bpf } for pid=2385 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.075000 audit[2385]: AVC avc: denied { bpf } for pid=2385 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.075000 audit[2385]: AVC avc: denied { bpf } for pid=2385 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.075000 audit[2385]: AVC avc: denied { perfmon } for pid=2385 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.075000 audit[2385]: AVC avc: denied { perfmon } for pid=2385 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.075000 audit[2385]: AVC avc: denied { perfmon } for pid=2385 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.075000 audit[2385]: AVC avc: denied { perfmon } for pid=2385 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.075000 audit[2385]: AVC avc: denied { perfmon } for pid=2385 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.075000 audit[2385]: AVC avc: denied { bpf } for pid=2385 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.075000 audit[2385]: AVC avc: denied { bpf } for pid=2385 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:43.075000 audit: BPF prog-id=86 op=LOAD Oct 2 19:39:43.075000 audit[2385]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400014db40 a2=78 a3=0 items=0 ppid=2305 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.075000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864323037663336663936373463343262636630393961336463356439 Oct 2 19:39:43.124613 env[1750]: time="2023-10-02T19:39:43.124527670Z" level=info msg="StartContainer for \"8d207f36f9674c42bcf099a3dc5d9e2205c01044222b7d72e4acfb93ebb58a0e\" returns successfully" Oct 2 19:39:43.225484 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) Oct 2 19:39:43.225668 kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes) Oct 2 19:39:43.227234 kernel: IPVS: ipvs loaded. Oct 2 19:39:43.253193 kernel: IPVS: [rr] scheduler registered. Oct 2 19:39:43.271198 kernel: IPVS: [wrr] scheduler registered. Oct 2 19:39:43.283210 kernel: IPVS: [sh] scheduler registered. 
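With kube-proxy running and the IPVS modules loaded, the NETFILTER_CFG records below show it registering its KUBE-* chains and rules in the mangle, nat and filter tables for both IPv4 (family=2) and IPv6 (family=10) through /usr/sbin/xtables-nft-multi; decoding the first PROCTITLE below, as in the earlier sketch, gives "iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle". A companion sketch, under the same saved-journal assumption, that summarizes these records by family, table and operation:

    # Minimal sketch: summarize the NETFILTER_CFG records by address family, table
    # and netfilter operation, reading a saved text copy of this journal.
    # The boot.log path is an assumption.
    import re
    from collections import Counter

    FAMILY = {"2": "ipv4", "10": "ipv6"}
    pat = re.compile(r'NETFILTER_CFG table=([a-z]+):\d+ family=(\d+) entries=(\d+) op=(\w+)')

    totals = Counter()
    with open("boot.log") as fh:
        for line in fh:
            m = pat.search(line)
            if m:
                table, family, entries, op = m.groups()
                totals[(FAMILY.get(family, family), table, op)] += int(entries)

    for (family, table, op), n in sorted(totals.items()):
        print(f"{family:5s} {table:7s} {op:20s} {n:3d} entries")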
Oct 2 19:39:43.402000 audit[2444]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=2444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:43.402000 audit[2444]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd02a9dd0 a2=0 a3=ffffa28ee6c0 items=0 ppid=2396 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.402000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:39:43.409000 audit[2445]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=2445 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:43.409000 audit[2445]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc90f26e0 a2=0 a3=ffff8f5726c0 items=0 ppid=2396 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.409000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:39:43.411000 audit[2447]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=2447 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:43.411000 audit[2447]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff8977fe0 a2=0 a3=ffffa6b3d6c0 items=0 ppid=2396 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.411000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:39:43.414000 audit[2448]: NETFILTER_CFG table=nat:38 family=10 entries=1 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:43.414000 audit[2448]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffeff6480 a2=0 a3=ffffbeecf6c0 items=0 ppid=2396 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.414000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:39:43.418000 audit[2449]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_chain pid=2449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:43.418000 audit[2449]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe07a21f0 a2=0 a3=ffffbcc1d6c0 items=0 ppid=2396 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.418000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:39:43.419000 audit[2450]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 
19:39:43.419000 audit[2450]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffecdbf880 a2=0 a3=ffff8bf3a6c0 items=0 ppid=2396 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.419000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:39:43.510000 audit[2451]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:43.510000 audit[2451]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff97cade0 a2=0 a3=ffff942706c0 items=0 ppid=2396 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.510000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:39:43.521000 audit[2453]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2453 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:43.521000 audit[2453]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd751e5c0 a2=0 a3=ffff895d46c0 items=0 ppid=2396 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.521000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:39:43.536000 audit[2456]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:43.536000 audit[2456]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff1beb320 a2=0 a3=ffffbd6186c0 items=0 ppid=2396 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.536000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:39:43.540000 audit[2457]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2457 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:43.540000 audit[2457]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca19e900 a2=0 a3=ffffbf52c6c0 items=0 ppid=2396 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.540000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:39:43.549000 audit[2459]: 
NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2459 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:43.549000 audit[2459]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff20240a0 a2=0 a3=ffff8bca46c0 items=0 ppid=2396 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.549000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:39:43.554000 audit[2460]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:43.554000 audit[2460]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe9599fc0 a2=0 a3=ffff8e6146c0 items=0 ppid=2396 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.554000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:39:43.564000 audit[2462]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=2462 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:43.564000 audit[2462]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdf313760 a2=0 a3=ffffa293f6c0 items=0 ppid=2396 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.564000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:39:43.579000 audit[2465]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2465 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:43.579000 audit[2465]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc59af790 a2=0 a3=ffffb45c86c0 items=0 ppid=2396 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.579000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:39:43.583000 audit[2466]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2466 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:43.583000 audit[2466]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff1fae750 a2=0 a3=ffffa127a6c0 items=0 ppid=2396 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.583000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:39:43.593000 audit[2468]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2468 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:43.593000 audit[2468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc49c4c50 a2=0 a3=ffffbb5016c0 items=0 ppid=2396 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.593000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:39:43.598000 audit[2469]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:43.598000 audit[2469]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe4c96d50 a2=0 a3=ffffb790f6c0 items=0 ppid=2396 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.598000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:39:43.608000 audit[2471]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=2471 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:43.608000 audit[2471]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdef3d880 a2=0 a3=ffff8f38a6c0 items=0 ppid=2396 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.608000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:39:43.623000 audit[2474]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:43.623000 audit[2474]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffef9ceb50 a2=0 a3=ffff8a7506c0 items=0 ppid=2396 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.623000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:39:43.641000 audit[2477]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:43.641000 audit[2477]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcff11300 a2=0 a3=ffffaf93e6c0 items=0 ppid=2396 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.641000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:39:43.645000 audit[2478]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:43.645000 audit[2478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff8c926a0 a2=0 a3=ffffa79776c0 items=0 ppid=2396 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.645000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:39:43.652623 kubelet[2207]: E1002 19:39:43.652556 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:43.654000 audit[2480]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=2480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:43.654000 audit[2480]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffed6c5220 a2=0 a3=ffffa032d6c0 items=0 ppid=2396 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.654000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:39:43.669000 audit[2483]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:43.669000 audit[2483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffff4ecd160 a2=0 a3=ffff9ed856c0 items=0 ppid=2396 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.669000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:39:43.696000 audit[2487]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=2487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:39:43.696000 audit[2487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffc2270720 a2=0 a3=ffffb9c416c0 items=0 ppid=2396 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 
19:39:43.696000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:39:43.713000 audit[2487]: NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=2487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:39:43.713000 audit[2487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffc2270720 a2=0 a3=ffffb9c416c0 items=0 ppid=2396 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.713000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:39:43.724000 audit[2491]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:43.724000 audit[2491]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffffd847ee0 a2=0 a3=ffffa25076c0 items=0 ppid=2396 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.724000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:39:43.733000 audit[2493]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:43.733000 audit[2493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc3c41920 a2=0 a3=ffff934e26c0 items=0 ppid=2396 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.733000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:39:43.751000 audit[2496]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=2496 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:43.751000 audit[2496]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc094a580 a2=0 a3=ffff89fcb6c0 items=0 ppid=2396 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.751000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:39:43.758000 audit[2497]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=2497 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:43.758000 audit[2497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe0c45d40 a2=0 a3=ffffa3e6f6c0 items=0 ppid=2396 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.758000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:39:43.769000 audit[2499]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:43.769000 audit[2499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffebf6fca0 a2=0 a3=ffff8bb906c0 items=0 ppid=2396 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.769000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:39:43.777000 audit[2500]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2500 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:43.777000 audit[2500]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd835e3a0 a2=0 a3=ffff99e7d6c0 items=0 ppid=2396 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.777000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:39:43.788000 audit[2502]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=2502 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:43.788000 audit[2502]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd8a91080 a2=0 a3=ffff9ee726c0 items=0 ppid=2396 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.788000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:39:43.802000 audit[2505]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2505 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:43.802000 audit[2505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffffbf5370 a2=0 a3=ffffa90d06c0 items=0 ppid=2396 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.802000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:39:43.809000 audit[2506]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain 
pid=2506 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:43.809000 audit[2506]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd5b2bdd0 a2=0 a3=ffffaf5aa6c0 items=0 ppid=2396 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.809000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:39:43.819000 audit[2508]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2508 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:43.819000 audit[2508]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc080c8b0 a2=0 a3=ffffbb8ea6c0 items=0 ppid=2396 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.819000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:39:43.825000 audit[2509]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:43.825000 audit[2509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdfcddf00 a2=0 a3=ffffac7a76c0 items=0 ppid=2396 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.825000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:39:43.836000 audit[2511]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2511 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:43.836000 audit[2511]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe0f60840 a2=0 a3=ffff874ee6c0 items=0 ppid=2396 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.836000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:39:43.851000 audit[2514]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=2514 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:43.851000 audit[2514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff4ce6eb0 a2=0 a3=ffffb85a66c0 items=0 ppid=2396 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.851000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:39:43.865000 audit[2517]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=2517 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:43.865000 audit[2517]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffec61d100 a2=0 a3=ffff8d2716c0 items=0 ppid=2396 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.865000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:39:43.869000 audit[2518]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=2518 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:43.869000 audit[2518]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffffc96c90 a2=0 a3=ffff88a946c0 items=0 ppid=2396 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.869000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:39:43.877000 audit[2520]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=2520 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:43.877000 audit[2520]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffc9e58460 a2=0 a3=ffffa2f6b6c0 items=0 ppid=2396 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.877000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:39:43.892000 audit[2523]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=2523 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:43.892000 audit[2523]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffd234bba0 a2=0 a3=ffff80a966c0 items=0 ppid=2396 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.892000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:39:43.915000 audit[2527]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=2527 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:39:43.915000 audit[2527]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffc7d69b40 a2=0 a3=ffff8bce96c0 items=0 ppid=2396 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.915000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:39:43.916000 audit[2527]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=2527 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:39:43.916000 audit[2527]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1860 a0=3 a1=ffffc7d69b40 a2=0 a3=ffff8bce96c0 items=0 ppid=2396 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:43.916000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:39:44.653101 kubelet[2207]: E1002 19:39:44.653028 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:45.654339 kubelet[2207]: E1002 19:39:45.654267 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:45.962335 update_engine[1735]: I1002 19:39:45.962247 1735 update_attempter.cc:505] Updating boot flags... Oct 2 19:39:46.655473 kubelet[2207]: E1002 19:39:46.655405 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:46.786725 kubelet[2207]: E1002 19:39:46.786683 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:47.656104 kubelet[2207]: E1002 19:39:47.656030 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:48.657002 kubelet[2207]: E1002 19:39:48.656914 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:49.657505 kubelet[2207]: E1002 19:39:49.657424 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:50.491551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount418828941.mount: Deactivated successfully. 
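The audit PROCTITLE fields in the entries above are the invoking command lines, hex-encoded with NUL bytes separating the arguments. A minimal Python sketch for decoding them (the helper name is ours; the sample value is copied verbatim from the KUBE-NODEPORTS chain-creation entry above):

```python
def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE value: hex bytes, argv separated by NULs."""
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

# Copied from the KUBE-NODEPORTS entry above:
sample = ("6970367461626C6573002D770035002D5700313030303030002D4E00"
          "4B5542452D4E4F4445504F525453002D740066696C746572")
print(decode_proctitle(sample))   # ip6tables -w 5 -W 100000 -N KUBE-NODEPORTS -t filter
```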
Oct 2 19:39:50.658045 kubelet[2207]: E1002 19:39:50.657878 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:51.658885 kubelet[2207]: E1002 19:39:51.658784 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:51.787832 kubelet[2207]: E1002 19:39:51.787693 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:52.659805 kubelet[2207]: E1002 19:39:52.659710 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:53.659993 kubelet[2207]: E1002 19:39:53.659879 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:54.500580 env[1750]: time="2023-10-02T19:39:54.500491825Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:54.503704 env[1750]: time="2023-10-02T19:39:54.503638307Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4204f456d3e4a8a7ac29109cf66dfd9b53e82d3f2e8574599e358096d890b8db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:54.506689 env[1750]: time="2023-10-02T19:39:54.506637849Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:54.507787 env[1750]: time="2023-10-02T19:39:54.507724364Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\" returns image reference \"sha256:4204f456d3e4a8a7ac29109cf66dfd9b53e82d3f2e8574599e358096d890b8db\"" Oct 2 19:39:54.511845 env[1750]: time="2023-10-02T19:39:54.511770870Z" level=info msg="CreateContainer within sandbox \"e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:39:54.528600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3355518925.mount: Deactivated successfully. Oct 2 19:39:54.538120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1490642008.mount: Deactivated successfully. Oct 2 19:39:54.544672 env[1750]: time="2023-10-02T19:39:54.544611622Z" level=info msg="CreateContainer within sandbox \"e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f\"" Oct 2 19:39:54.545965 env[1750]: time="2023-10-02T19:39:54.545920485Z" level=info msg="StartContainer for \"a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f\"" Oct 2 19:39:54.598993 systemd[1]: Started cri-containerd-a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f.scope. Oct 2 19:39:54.636491 systemd[1]: cri-containerd-a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f.scope: Deactivated successfully. 
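The transient mount units being cleaned up here (var-lib-containerd-tmpmounts-containerd\x2dmount3355518925.mount and friends) use systemd's unit-name escaping, where '/' is written as '-' and a literal '-' as '\x2d'. `systemd-escape --unescape --path` reverses this; a rough Python equivalent, assuming only that simple escaping appears in these names (function name is ours), is:

```python
import re

def unescape_mount_unit(name: str) -> str:
    """Roughly invert systemd path escaping for a transient .mount unit name."""
    if name.endswith(".mount"):
        name = name[:-len(".mount")]
    name = name.replace("-", "/")                  # '-' encodes a path separator
    name = re.sub(r"\\x([0-9a-fA-F]{2})",          # '\xNN' encodes a literal byte
                  lambda m: chr(int(m.group(1), 16)), name)
    return "/" + name

print(unescape_mount_unit(r"var-lib-containerd-tmpmounts-containerd\x2dmount3355518925.mount"))
# -> /var/lib/containerd/tmpmounts/containerd-mount3355518925
```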
Oct 2 19:39:54.661083 kubelet[2207]: E1002 19:39:54.661005 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:55.523346 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f-rootfs.mount: Deactivated successfully. Oct 2 19:39:55.662090 kubelet[2207]: E1002 19:39:55.662044 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:56.140351 env[1750]: time="2023-10-02T19:39:56.140255027Z" level=info msg="shim disconnected" id=a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f Oct 2 19:39:56.140985 env[1750]: time="2023-10-02T19:39:56.140350655Z" level=warning msg="cleaning up after shim disconnected" id=a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f namespace=k8s.io Oct 2 19:39:56.140985 env[1750]: time="2023-10-02T19:39:56.140377607Z" level=info msg="cleaning up dead shim" Oct 2 19:39:56.169092 env[1750]: time="2023-10-02T19:39:56.168987896Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:39:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2649 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:39:56Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:39:56.169716 env[1750]: time="2023-10-02T19:39:56.169544011Z" level=error msg="copy shim log" error="read /proc/self/fd/53: file already closed" Oct 2 19:39:56.170335 env[1750]: time="2023-10-02T19:39:56.170249443Z" level=error msg="Failed to pipe stdout of container \"a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f\"" error="reading from a closed fifo" Oct 2 19:39:56.174397 env[1750]: time="2023-10-02T19:39:56.174308069Z" level=error msg="Failed to pipe stderr of container \"a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f\"" error="reading from a closed fifo" Oct 2 19:39:56.177466 env[1750]: time="2023-10-02T19:39:56.177332703Z" level=error msg="StartContainer for \"a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:39:56.178677 kubelet[2207]: E1002 19:39:56.178209 2207 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f" Oct 2 19:39:56.178677 kubelet[2207]: E1002 19:39:56.178479 2207 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:39:56.178677 kubelet[2207]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:39:56.178677 
kubelet[2207]: rm /hostbin/cilium-mount Oct 2 19:39:56.179089 kubelet[2207]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ggn8l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:39:56.179391 kubelet[2207]: E1002 19:39:56.178587 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7shww" podUID=559c54e9-1f43-4791-8970-e50e73f5dcab Oct 2 19:39:56.218719 env[1750]: time="2023-10-02T19:39:56.218639909Z" level=info msg="CreateContainer within sandbox \"e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:39:56.244793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3395182657.mount: Deactivated successfully. Oct 2 19:39:56.258040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount480316704.mount: Deactivated successfully. Oct 2 19:39:56.265292 env[1750]: time="2023-10-02T19:39:56.265225529Z" level=info msg="CreateContainer within sandbox \"e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"8f27dd2f6a9e1e5e919122a64c95c97b6bdc775d5053dab65e38338e510afb0f\"" Oct 2 19:39:56.267008 env[1750]: time="2023-10-02T19:39:56.266929300Z" level=info msg="StartContainer for \"8f27dd2f6a9e1e5e919122a64c95c97b6bdc775d5053dab65e38338e510afb0f\"" Oct 2 19:39:56.314057 systemd[1]: Started cri-containerd-8f27dd2f6a9e1e5e919122a64c95c97b6bdc775d5053dab65e38338e510afb0f.scope. Oct 2 19:39:56.357542 systemd[1]: cri-containerd-8f27dd2f6a9e1e5e919122a64c95c97b6bdc775d5053dab65e38338e510afb0f.scope: Deactivated successfully. 
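Each failed start of the mount-cgroup init container is reported twice, once by containerd (level=error msg="StartContainer for \"<id>\" failed" ...) and again by the kubelet with the full container spec. When skimming a journal like this, a small filter that pulls out just the container ID and the underlying runtime error can help; a sketch assuming the exact containerd message format shown above (script and function names are ours):

```python
import re
import sys

# containerd reports each failure as:
#   ... level=error msg="StartContainer for \"<container-id>\" failed" error="<runtime error>"
START_FAIL = re.compile(
    r'StartContainer for \\"(?P<cid>[0-9a-f]+)\\" failed" error="(?P<err>[^"]+)"'
)

def failed_starts(journal_text):
    """Yield (short container id, error) for each containerd StartContainer failure."""
    for m in START_FAIL.finditer(journal_text):
        yield m.group("cid")[:12], m.group("err")

if __name__ == "__main__":
    # e.g. pipe the journal text in: journalctl -o cat | python3 failed_starts.py
    for cid, err in failed_starts(sys.stdin.read()):
        print(f"{cid}  {err}")
```

Against this section it would list each failing mount-cgroup attempt with the same "write /proc/self/attr/keycreate: invalid argument" runc error.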
Oct 2 19:39:56.378469 env[1750]: time="2023-10-02T19:39:56.378384785Z" level=info msg="shim disconnected" id=8f27dd2f6a9e1e5e919122a64c95c97b6bdc775d5053dab65e38338e510afb0f Oct 2 19:39:56.378469 env[1750]: time="2023-10-02T19:39:56.378470945Z" level=warning msg="cleaning up after shim disconnected" id=8f27dd2f6a9e1e5e919122a64c95c97b6bdc775d5053dab65e38338e510afb0f namespace=k8s.io Oct 2 19:39:56.378860 env[1750]: time="2023-10-02T19:39:56.378494957Z" level=info msg="cleaning up dead shim" Oct 2 19:39:56.409536 env[1750]: time="2023-10-02T19:39:56.407484193Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:39:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2686 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:39:56Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8f27dd2f6a9e1e5e919122a64c95c97b6bdc775d5053dab65e38338e510afb0f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:39:56.409536 env[1750]: time="2023-10-02T19:39:56.408064189Z" level=error msg="copy shim log" error="read /proc/self/fd/53: file already closed" Oct 2 19:39:56.409536 env[1750]: time="2023-10-02T19:39:56.409269420Z" level=error msg="Failed to pipe stderr of container \"8f27dd2f6a9e1e5e919122a64c95c97b6bdc775d5053dab65e38338e510afb0f\"" error="reading from a closed fifo" Oct 2 19:39:56.409536 env[1750]: time="2023-10-02T19:39:56.409472664Z" level=error msg="Failed to pipe stdout of container \"8f27dd2f6a9e1e5e919122a64c95c97b6bdc775d5053dab65e38338e510afb0f\"" error="reading from a closed fifo" Oct 2 19:39:56.412178 env[1750]: time="2023-10-02T19:39:56.412017359Z" level=error msg="StartContainer for \"8f27dd2f6a9e1e5e919122a64c95c97b6bdc775d5053dab65e38338e510afb0f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:39:56.413326 kubelet[2207]: E1002 19:39:56.412564 2207 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8f27dd2f6a9e1e5e919122a64c95c97b6bdc775d5053dab65e38338e510afb0f" Oct 2 19:39:56.413326 kubelet[2207]: E1002 19:39:56.412721 2207 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:39:56.413326 kubelet[2207]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:39:56.413326 kubelet[2207]: rm /hostbin/cilium-mount Oct 2 19:39:56.413834 kubelet[2207]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ggn8l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:39:56.414192 kubelet[2207]: E1002 19:39:56.412787 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7shww" podUID=559c54e9-1f43-4791-8970-e50e73f5dcab Oct 2 19:39:56.608957 kubelet[2207]: E1002 19:39:56.608883 2207 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:56.664891 kubelet[2207]: E1002 19:39:56.663510 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:56.789178 kubelet[2207]: E1002 19:39:56.789107 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:57.218909 kubelet[2207]: I1002 19:39:57.218848 2207 scope.go:115] "RemoveContainer" containerID="a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f" Oct 2 19:39:57.219598 kubelet[2207]: I1002 19:39:57.219539 2207 scope.go:115] "RemoveContainer" containerID="a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f" Oct 2 19:39:57.223908 env[1750]: time="2023-10-02T19:39:57.222454385Z" level=info msg="RemoveContainer for \"a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f\"" Oct 2 19:39:57.225728 env[1750]: time="2023-10-02T19:39:57.225658707Z" level=info msg="RemoveContainer for \"a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f\"" Oct 2 19:39:57.225994 env[1750]: time="2023-10-02T19:39:57.225880587Z" level=error msg="RemoveContainer for 
\"a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f\" failed" error="failed to set removing state for container \"a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f\": container is already in removing state" Oct 2 19:39:57.226390 kubelet[2207]: E1002 19:39:57.226353 2207 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f\": container is already in removing state" containerID="a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f" Oct 2 19:39:57.226674 kubelet[2207]: E1002 19:39:57.226642 2207 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f": container is already in removing state; Skipping pod "cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab)" Oct 2 19:39:57.227362 kubelet[2207]: E1002 19:39:57.227319 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab)\"" pod="kube-system/cilium-7shww" podUID=559c54e9-1f43-4791-8970-e50e73f5dcab Oct 2 19:39:57.228841 env[1750]: time="2023-10-02T19:39:57.228732013Z" level=info msg="RemoveContainer for \"a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f\" returns successfully" Oct 2 19:39:57.664739 kubelet[2207]: E1002 19:39:57.663873 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:58.225005 kubelet[2207]: E1002 19:39:58.224951 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab)\"" pod="kube-system/cilium-7shww" podUID=559c54e9-1f43-4791-8970-e50e73f5dcab Oct 2 19:39:58.666390 kubelet[2207]: E1002 19:39:58.665957 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:59.261436 kubelet[2207]: W1002 19:39:59.261367 2207 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod559c54e9_1f43_4791_8970_e50e73f5dcab.slice/cri-containerd-a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f.scope WatchSource:0}: container "a3d47c2e6c891d5b2c6ab628c65d243126def25ff02a9d2276c7485771cdf51f" in namespace "k8s.io": not found Oct 2 19:39:59.667644 kubelet[2207]: E1002 19:39:59.667240 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:00.669029 kubelet[2207]: E1002 19:40:00.668960 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:01.669629 kubelet[2207]: E1002 19:40:01.669550 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:01.790320 kubelet[2207]: E1002 19:40:01.790285 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:02.372660 kubelet[2207]: W1002 19:40:02.371794 2207 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod559c54e9_1f43_4791_8970_e50e73f5dcab.slice/cri-containerd-8f27dd2f6a9e1e5e919122a64c95c97b6bdc775d5053dab65e38338e510afb0f.scope WatchSource:0}: task 8f27dd2f6a9e1e5e919122a64c95c97b6bdc775d5053dab65e38338e510afb0f not found: not found Oct 2 19:40:02.670787 kubelet[2207]: E1002 19:40:02.670354 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:03.671111 kubelet[2207]: E1002 19:40:03.671063 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:04.671967 kubelet[2207]: E1002 19:40:04.671909 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:05.672998 kubelet[2207]: E1002 19:40:05.672912 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:06.674527 kubelet[2207]: E1002 19:40:06.674454 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:06.791273 kubelet[2207]: E1002 19:40:06.791238 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:07.675525 kubelet[2207]: E1002 19:40:07.675478 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:08.676583 kubelet[2207]: E1002 19:40:08.676530 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:09.678018 kubelet[2207]: E1002 19:40:09.677975 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:10.679337 kubelet[2207]: E1002 19:40:10.679268 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:11.679775 kubelet[2207]: E1002 19:40:11.679691 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:11.793599 kubelet[2207]: E1002 19:40:11.793562 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:12.109499 env[1750]: time="2023-10-02T19:40:12.109397039Z" level=info msg="CreateContainer within sandbox \"e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:40:12.130496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1740999098.mount: Deactivated successfully. 
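Most of the surrounding entries are the same two kubelet messages repeating: the missing static-pod path /etc/kubernetes/manifests and the CNI-not-initialized readiness condition. A quick tally of such recurring messages makes the journal easier to skim; a sketch assuming the klog line format seen above (regex and file name are illustrative):

```python
import re
from collections import Counter

# klog lines in this journal look like:
#   kubelet[2207]: E1002 19:39:44.653028 2207 file_linux.go:61] "Unable to read config path" ...
MSG = re.compile(r'kubelet\[\d+\]: [EWI]\d{4} [\d:.]+\s+\d+ \S+\] "([^"]+)"')

def top_kubelet_messages(journal_text: str, n: int = 5):
    """Count the structured kubelet messages and return the n most common."""
    return Counter(MSG.findall(journal_text)).most_common(n)

# e.g. top_kubelet_messages(open("node-journal.txt").read())
# -> [('Unable to read config path', ...), ('Container runtime network not ready', ...), ...]
```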
Oct 2 19:40:12.143240 env[1750]: time="2023-10-02T19:40:12.143133748Z" level=info msg="CreateContainer within sandbox \"e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626\"" Oct 2 19:40:12.144365 env[1750]: time="2023-10-02T19:40:12.144293728Z" level=info msg="StartContainer for \"d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626\"" Oct 2 19:40:12.196472 systemd[1]: Started cri-containerd-d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626.scope. Oct 2 19:40:12.238383 systemd[1]: cri-containerd-d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626.scope: Deactivated successfully. Oct 2 19:40:12.265534 env[1750]: time="2023-10-02T19:40:12.265440354Z" level=info msg="shim disconnected" id=d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626 Oct 2 19:40:12.265823 env[1750]: time="2023-10-02T19:40:12.265533390Z" level=warning msg="cleaning up after shim disconnected" id=d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626 namespace=k8s.io Oct 2 19:40:12.265823 env[1750]: time="2023-10-02T19:40:12.265563270Z" level=info msg="cleaning up dead shim" Oct 2 19:40:12.292127 env[1750]: time="2023-10-02T19:40:12.292052185Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:40:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2725 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:40:12Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:40:12.292602 env[1750]: time="2023-10-02T19:40:12.292513129Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:40:12.293323 env[1750]: time="2023-10-02T19:40:12.293265840Z" level=error msg="Failed to pipe stdout of container \"d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626\"" error="reading from a closed fifo" Oct 2 19:40:12.293607 env[1750]: time="2023-10-02T19:40:12.293564220Z" level=error msg="Failed to pipe stderr of container \"d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626\"" error="reading from a closed fifo" Oct 2 19:40:12.295901 env[1750]: time="2023-10-02T19:40:12.295839252Z" level=error msg="StartContainer for \"d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:40:12.297001 kubelet[2207]: E1002 19:40:12.296368 2207 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626" Oct 2 19:40:12.297001 kubelet[2207]: E1002 19:40:12.296510 2207 kuberuntime_manager.go:862] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:40:12.297001 kubelet[2207]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:40:12.297001 kubelet[2207]: rm /hostbin/cilium-mount Oct 2 19:40:12.297488 kubelet[2207]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ggn8l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:40:12.297639 kubelet[2207]: E1002 19:40:12.296569 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7shww" podUID=559c54e9-1f43-4791-8970-e50e73f5dcab Oct 2 19:40:12.680505 kubelet[2207]: E1002 19:40:12.680398 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:13.124039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626-rootfs.mount: Deactivated successfully. 
Oct 2 19:40:13.262962 kubelet[2207]: I1002 19:40:13.262928 2207 scope.go:115] "RemoveContainer" containerID="8f27dd2f6a9e1e5e919122a64c95c97b6bdc775d5053dab65e38338e510afb0f" Oct 2 19:40:13.264017 kubelet[2207]: I1002 19:40:13.263963 2207 scope.go:115] "RemoveContainer" containerID="8f27dd2f6a9e1e5e919122a64c95c97b6bdc775d5053dab65e38338e510afb0f" Oct 2 19:40:13.265702 env[1750]: time="2023-10-02T19:40:13.265613051Z" level=info msg="RemoveContainer for \"8f27dd2f6a9e1e5e919122a64c95c97b6bdc775d5053dab65e38338e510afb0f\"" Oct 2 19:40:13.267259 env[1750]: time="2023-10-02T19:40:13.267206051Z" level=info msg="RemoveContainer for \"8f27dd2f6a9e1e5e919122a64c95c97b6bdc775d5053dab65e38338e510afb0f\"" Oct 2 19:40:13.267421 env[1750]: time="2023-10-02T19:40:13.267344507Z" level=error msg="RemoveContainer for \"8f27dd2f6a9e1e5e919122a64c95c97b6bdc775d5053dab65e38338e510afb0f\" failed" error="failed to set removing state for container \"8f27dd2f6a9e1e5e919122a64c95c97b6bdc775d5053dab65e38338e510afb0f\": container is already in removing state" Oct 2 19:40:13.269409 kubelet[2207]: E1002 19:40:13.269357 2207 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"8f27dd2f6a9e1e5e919122a64c95c97b6bdc775d5053dab65e38338e510afb0f\": container is already in removing state" containerID="8f27dd2f6a9e1e5e919122a64c95c97b6bdc775d5053dab65e38338e510afb0f" Oct 2 19:40:13.269568 kubelet[2207]: E1002 19:40:13.269421 2207 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "8f27dd2f6a9e1e5e919122a64c95c97b6bdc775d5053dab65e38338e510afb0f": container is already in removing state; Skipping pod "cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab)" Oct 2 19:40:13.269903 kubelet[2207]: E1002 19:40:13.269862 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab)\"" pod="kube-system/cilium-7shww" podUID=559c54e9-1f43-4791-8970-e50e73f5dcab Oct 2 19:40:13.272748 env[1750]: time="2023-10-02T19:40:13.272687314Z" level=info msg="RemoveContainer for \"8f27dd2f6a9e1e5e919122a64c95c97b6bdc775d5053dab65e38338e510afb0f\" returns successfully" Oct 2 19:40:13.681377 kubelet[2207]: E1002 19:40:13.681333 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:14.682853 kubelet[2207]: E1002 19:40:14.682748 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:15.369686 kubelet[2207]: W1002 19:40:15.369638 2207 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod559c54e9_1f43_4791_8970_e50e73f5dcab.slice/cri-containerd-d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626.scope WatchSource:0}: task d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626 not found: not found Oct 2 19:40:15.683703 kubelet[2207]: E1002 19:40:15.683542 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:16.608887 kubelet[2207]: E1002 19:40:16.608818 2207 file.go:104] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:16.684226 kubelet[2207]: E1002 19:40:16.684173 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:16.795980 kubelet[2207]: E1002 19:40:16.795932 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:17.685448 kubelet[2207]: E1002 19:40:17.685374 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:18.685597 kubelet[2207]: E1002 19:40:18.685552 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:19.686992 kubelet[2207]: E1002 19:40:19.686946 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:20.687832 kubelet[2207]: E1002 19:40:20.687764 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:21.688411 kubelet[2207]: E1002 19:40:21.688363 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:21.797352 kubelet[2207]: E1002 19:40:21.797319 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:22.690025 kubelet[2207]: E1002 19:40:22.689978 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:23.691307 kubelet[2207]: E1002 19:40:23.691261 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:24.692399 kubelet[2207]: E1002 19:40:24.692329 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:25.693057 kubelet[2207]: E1002 19:40:25.692992 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:26.104832 kubelet[2207]: E1002 19:40:26.104433 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab)\"" pod="kube-system/cilium-7shww" podUID=559c54e9-1f43-4791-8970-e50e73f5dcab Oct 2 19:40:26.693280 kubelet[2207]: E1002 19:40:26.693229 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:26.798879 kubelet[2207]: E1002 19:40:26.798302 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:27.694567 kubelet[2207]: E1002 19:40:27.694518 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:28.695721 kubelet[2207]: E1002 19:40:28.695646 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:29.696872 kubelet[2207]: 
E1002 19:40:29.696823 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:30.698793 kubelet[2207]: E1002 19:40:30.698719 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:31.699260 kubelet[2207]: E1002 19:40:31.699186 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:31.801054 kubelet[2207]: E1002 19:40:31.800982 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:32.699899 kubelet[2207]: E1002 19:40:32.699807 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:33.701644 kubelet[2207]: E1002 19:40:33.701561 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:34.702889 kubelet[2207]: E1002 19:40:34.702844 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:35.704095 kubelet[2207]: E1002 19:40:35.704020 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:36.608676 kubelet[2207]: E1002 19:40:36.608631 2207 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:36.704300 kubelet[2207]: E1002 19:40:36.704226 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:36.802019 kubelet[2207]: E1002 19:40:36.801931 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:37.704698 kubelet[2207]: E1002 19:40:37.704621 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:38.705046 kubelet[2207]: E1002 19:40:38.704998 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:39.706506 kubelet[2207]: E1002 19:40:39.706407 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:40.108451 env[1750]: time="2023-10-02T19:40:40.108031578Z" level=info msg="CreateContainer within sandbox \"e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:40:40.124029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1355089747.mount: Deactivated successfully. 
Oct 2 19:40:40.134415 env[1750]: time="2023-10-02T19:40:40.134330816Z" level=info msg="CreateContainer within sandbox \"e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4\"" Oct 2 19:40:40.135071 env[1750]: time="2023-10-02T19:40:40.135012636Z" level=info msg="StartContainer for \"5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4\"" Oct 2 19:40:40.188820 systemd[1]: Started cri-containerd-5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4.scope. Oct 2 19:40:40.235965 systemd[1]: cri-containerd-5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4.scope: Deactivated successfully. Oct 2 19:40:40.260847 env[1750]: time="2023-10-02T19:40:40.260756522Z" level=info msg="shim disconnected" id=5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4 Oct 2 19:40:40.261196 env[1750]: time="2023-10-02T19:40:40.260847628Z" level=warning msg="cleaning up after shim disconnected" id=5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4 namespace=k8s.io Oct 2 19:40:40.261196 env[1750]: time="2023-10-02T19:40:40.260873933Z" level=info msg="cleaning up dead shim" Oct 2 19:40:40.290998 env[1750]: time="2023-10-02T19:40:40.290905621Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:40:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2768 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:40:40Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:40:40.291566 env[1750]: time="2023-10-02T19:40:40.291456626Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:40:40.295441 env[1750]: time="2023-10-02T19:40:40.295327704Z" level=error msg="Failed to pipe stdout of container \"5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4\"" error="reading from a closed fifo" Oct 2 19:40:40.295619 env[1750]: time="2023-10-02T19:40:40.295495552Z" level=error msg="Failed to pipe stderr of container \"5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4\"" error="reading from a closed fifo" Oct 2 19:40:40.300360 env[1750]: time="2023-10-02T19:40:40.300252396Z" level=error msg="StartContainer for \"5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:40:40.300759 kubelet[2207]: E1002 19:40:40.300685 2207 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4" Oct 2 19:40:40.300976 kubelet[2207]: E1002 19:40:40.300861 2207 kuberuntime_manager.go:862] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:40:40.300976 kubelet[2207]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:40:40.300976 kubelet[2207]: rm /hostbin/cilium-mount Oct 2 19:40:40.300976 kubelet[2207]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ggn8l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:40:40.301342 kubelet[2207]: E1002 19:40:40.300951 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7shww" podUID=559c54e9-1f43-4791-8970-e50e73f5dcab Oct 2 19:40:40.331331 kubelet[2207]: I1002 19:40:40.331280 2207 scope.go:115] "RemoveContainer" containerID="d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626" Oct 2 19:40:40.331939 kubelet[2207]: I1002 19:40:40.331883 2207 scope.go:115] "RemoveContainer" containerID="d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626" Oct 2 19:40:40.334651 env[1750]: time="2023-10-02T19:40:40.334571812Z" level=info msg="RemoveContainer for \"d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626\"" Oct 2 19:40:40.338910 env[1750]: time="2023-10-02T19:40:40.338833523Z" level=info msg="RemoveContainer for \"d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626\"" Oct 2 19:40:40.340279 env[1750]: time="2023-10-02T19:40:40.340226517Z" level=info msg="RemoveContainer for \"d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626\" returns successfully" Oct 2 19:40:40.340608 env[1750]: time="2023-10-02T19:40:40.338998143Z" level=error 
msg="RemoveContainer for \"d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626\" failed" error="rpc error: code = NotFound desc = get container info: container \"d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626\" in namespace \"k8s.io\": not found" Oct 2 19:40:40.340932 kubelet[2207]: E1002 19:40:40.340887 2207 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = NotFound desc = get container info: container \"d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626\" in namespace \"k8s.io\": not found" containerID="d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626" Oct 2 19:40:40.341045 kubelet[2207]: E1002 19:40:40.340943 2207 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = NotFound desc = get container info: container "d79184a0efe934e9a3c72dea7e5c154dd2e757b3c5ee671ef056d7069ca67626" in namespace "k8s.io": not found; Skipping pod "cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab)" Oct 2 19:40:40.341822 kubelet[2207]: E1002 19:40:40.341388 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab)\"" pod="kube-system/cilium-7shww" podUID=559c54e9-1f43-4791-8970-e50e73f5dcab Oct 2 19:40:40.707540 kubelet[2207]: E1002 19:40:40.707480 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:41.119643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4-rootfs.mount: Deactivated successfully. 
Oct 2 19:40:41.707847 kubelet[2207]: E1002 19:40:41.707802 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:41.803328 kubelet[2207]: E1002 19:40:41.803286 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:42.709177 kubelet[2207]: E1002 19:40:42.709104 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:43.366465 kubelet[2207]: W1002 19:40:43.366402 2207 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod559c54e9_1f43_4791_8970_e50e73f5dcab.slice/cri-containerd-5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4.scope WatchSource:0}: task 5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4 not found: not found Oct 2 19:40:43.710338 kubelet[2207]: E1002 19:40:43.710272 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:44.710511 kubelet[2207]: E1002 19:40:44.710458 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:45.711991 kubelet[2207]: E1002 19:40:45.711907 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:46.712335 kubelet[2207]: E1002 19:40:46.712258 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:46.804115 kubelet[2207]: E1002 19:40:46.804066 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:47.712644 kubelet[2207]: E1002 19:40:47.712576 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:48.713810 kubelet[2207]: E1002 19:40:48.713737 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:49.714404 kubelet[2207]: E1002 19:40:49.714342 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:50.714867 kubelet[2207]: E1002 19:40:50.714812 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:51.715290 kubelet[2207]: E1002 19:40:51.715213 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:51.805512 kubelet[2207]: E1002 19:40:51.805480 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:52.716286 kubelet[2207]: E1002 19:40:52.716210 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:53.716718 kubelet[2207]: E1002 19:40:53.716670 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:54.105186 kubelet[2207]: E1002 
19:40:54.105038 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab)\"" pod="kube-system/cilium-7shww" podUID=559c54e9-1f43-4791-8970-e50e73f5dcab Oct 2 19:40:54.717815 kubelet[2207]: E1002 19:40:54.717744 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:55.718068 kubelet[2207]: E1002 19:40:55.718026 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:56.608531 kubelet[2207]: E1002 19:40:56.608435 2207 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:56.719188 kubelet[2207]: E1002 19:40:56.719128 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:56.806554 kubelet[2207]: E1002 19:40:56.806520 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:57.720539 kubelet[2207]: E1002 19:40:57.720493 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:58.722321 kubelet[2207]: E1002 19:40:58.722244 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:59.723273 kubelet[2207]: E1002 19:40:59.723205 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:00.723684 kubelet[2207]: E1002 19:41:00.723614 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:01.724902 kubelet[2207]: E1002 19:41:01.724823 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:01.808135 kubelet[2207]: E1002 19:41:01.808087 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:02.725215 kubelet[2207]: E1002 19:41:02.725168 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:03.726731 kubelet[2207]: E1002 19:41:03.726631 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:04.727570 kubelet[2207]: E1002 19:41:04.727499 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:05.728166 kubelet[2207]: E1002 19:41:05.728081 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:06.728560 kubelet[2207]: E1002 19:41:06.728506 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:06.809508 kubelet[2207]: E1002 19:41:06.809301 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:07.729809 kubelet[2207]: E1002 19:41:07.729759 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:08.104588 kubelet[2207]: E1002 19:41:08.104299 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab)\"" pod="kube-system/cilium-7shww" podUID=559c54e9-1f43-4791-8970-e50e73f5dcab Oct 2 19:41:08.731279 kubelet[2207]: E1002 19:41:08.731226 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:09.732995 kubelet[2207]: E1002 19:41:09.732941 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:10.734830 kubelet[2207]: E1002 19:41:10.734744 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:11.735991 kubelet[2207]: E1002 19:41:11.735938 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:11.811490 kubelet[2207]: E1002 19:41:11.811455 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:12.737324 kubelet[2207]: E1002 19:41:12.737278 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:13.738388 kubelet[2207]: E1002 19:41:13.738300 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:14.739579 kubelet[2207]: E1002 19:41:14.739507 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:15.740400 kubelet[2207]: E1002 19:41:15.740343 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:16.608660 kubelet[2207]: E1002 19:41:16.608591 2207 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:16.741927 kubelet[2207]: E1002 19:41:16.741850 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:16.813588 kubelet[2207]: E1002 19:41:16.813551 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:17.742487 kubelet[2207]: E1002 19:41:17.742403 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:18.744069 kubelet[2207]: E1002 19:41:18.743994 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:19.744213 kubelet[2207]: E1002 19:41:19.744133 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:20.745543 kubelet[2207]: E1002 19:41:20.745494 2207 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:21.747032 kubelet[2207]: E1002 19:41:21.746956 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:21.815803 kubelet[2207]: E1002 19:41:21.815649 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:22.748084 kubelet[2207]: E1002 19:41:22.748019 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:23.110456 env[1750]: time="2023-10-02T19:41:23.109014641Z" level=info msg="CreateContainer within sandbox \"e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:41:23.130469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2582385298.mount: Deactivated successfully. Oct 2 19:41:23.138891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2269311569.mount: Deactivated successfully. Oct 2 19:41:23.144219 env[1750]: time="2023-10-02T19:41:23.144126793Z" level=info msg="CreateContainer within sandbox \"e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666\"" Oct 2 19:41:23.145515 env[1750]: time="2023-10-02T19:41:23.145459093Z" level=info msg="StartContainer for \"e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666\"" Oct 2 19:41:23.196415 systemd[1]: Started cri-containerd-e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666.scope. Oct 2 19:41:23.235394 systemd[1]: cri-containerd-e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666.scope: Deactivated successfully. 
Oct 2 19:41:23.253990 env[1750]: time="2023-10-02T19:41:23.253898596Z" level=info msg="shim disconnected" id=e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666 Oct 2 19:41:23.253990 env[1750]: time="2023-10-02T19:41:23.253975061Z" level=warning msg="cleaning up after shim disconnected" id=e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666 namespace=k8s.io Oct 2 19:41:23.254363 env[1750]: time="2023-10-02T19:41:23.253997345Z" level=info msg="cleaning up dead shim" Oct 2 19:41:23.282716 env[1750]: time="2023-10-02T19:41:23.282642943Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:41:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2810 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:41:23Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:41:23.283254 env[1750]: time="2023-10-02T19:41:23.283117103Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:41:23.285325 env[1750]: time="2023-10-02T19:41:23.285246329Z" level=error msg="Failed to pipe stdout of container \"e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666\"" error="reading from a closed fifo" Oct 2 19:41:23.286422 env[1750]: time="2023-10-02T19:41:23.286342442Z" level=error msg="Failed to pipe stderr of container \"e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666\"" error="reading from a closed fifo" Oct 2 19:41:23.289017 env[1750]: time="2023-10-02T19:41:23.288907932Z" level=error msg="StartContainer for \"e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:41:23.289725 kubelet[2207]: E1002 19:41:23.289460 2207 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666" Oct 2 19:41:23.289725 kubelet[2207]: E1002 19:41:23.289616 2207 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:41:23.289725 kubelet[2207]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:41:23.289725 kubelet[2207]: rm /hostbin/cilium-mount Oct 2 19:41:23.290116 kubelet[2207]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ggn8l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:41:23.290352 kubelet[2207]: E1002 19:41:23.289686 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7shww" podUID=559c54e9-1f43-4791-8970-e50e73f5dcab Oct 2 19:41:23.437787 kubelet[2207]: I1002 19:41:23.437746 2207 scope.go:115] "RemoveContainer" containerID="5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4" Oct 2 19:41:23.438347 kubelet[2207]: I1002 19:41:23.438303 2207 scope.go:115] "RemoveContainer" containerID="5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4" Oct 2 19:41:23.441556 env[1750]: time="2023-10-02T19:41:23.441395355Z" level=info msg="RemoveContainer for \"5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4\"" Oct 2 19:41:23.443819 env[1750]: time="2023-10-02T19:41:23.443757023Z" level=info msg="RemoveContainer for \"5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4\"" Oct 2 19:41:23.444324 env[1750]: time="2023-10-02T19:41:23.444255843Z" level=error msg="RemoveContainer for \"5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4\" failed" error="failed to set removing state for container \"5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4\": container is already in removing state" Oct 2 19:41:23.444596 kubelet[2207]: E1002 19:41:23.444550 2207 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4\": container is already in removing state" 
containerID="5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4" Oct 2 19:41:23.444754 kubelet[2207]: E1002 19:41:23.444616 2207 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4": container is already in removing state; Skipping pod "cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab)" Oct 2 19:41:23.445077 kubelet[2207]: E1002 19:41:23.445039 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab)\"" pod="kube-system/cilium-7shww" podUID=559c54e9-1f43-4791-8970-e50e73f5dcab Oct 2 19:41:23.448044 env[1750]: time="2023-10-02T19:41:23.447973414Z" level=info msg="RemoveContainer for \"5abea6f2d03ad188eaa1c1a6624ff706076b9674b69446f77cf7c66833a59ef4\" returns successfully" Oct 2 19:41:23.750038 kubelet[2207]: E1002 19:41:23.749220 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:24.122257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666-rootfs.mount: Deactivated successfully. Oct 2 19:41:24.750104 kubelet[2207]: E1002 19:41:24.750026 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:25.750241 kubelet[2207]: E1002 19:41:25.750194 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:26.361881 kubelet[2207]: W1002 19:41:26.361833 2207 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod559c54e9_1f43_4791_8970_e50e73f5dcab.slice/cri-containerd-e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666.scope WatchSource:0}: task e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666 not found: not found Oct 2 19:41:26.751859 kubelet[2207]: E1002 19:41:26.751788 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:26.816960 kubelet[2207]: E1002 19:41:26.816892 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:27.752493 kubelet[2207]: E1002 19:41:27.752425 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:28.752871 kubelet[2207]: E1002 19:41:28.752789 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:29.753275 kubelet[2207]: E1002 19:41:29.753201 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:30.754325 kubelet[2207]: E1002 19:41:30.754272 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:31.756204 kubelet[2207]: E1002 19:41:31.756121 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:41:31.818594 kubelet[2207]: E1002 19:41:31.818562 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:32.756978 kubelet[2207]: E1002 19:41:32.756904 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:33.757703 kubelet[2207]: E1002 19:41:33.757653 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:34.759305 kubelet[2207]: E1002 19:41:34.759256 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:35.760705 kubelet[2207]: E1002 19:41:35.760633 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:36.608472 kubelet[2207]: E1002 19:41:36.608424 2207 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:36.761691 kubelet[2207]: E1002 19:41:36.761643 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:36.820206 kubelet[2207]: E1002 19:41:36.820126 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:37.763348 kubelet[2207]: E1002 19:41:37.763272 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:38.764074 kubelet[2207]: E1002 19:41:38.764010 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:39.106510 kubelet[2207]: E1002 19:41:39.106093 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab)\"" pod="kube-system/cilium-7shww" podUID=559c54e9-1f43-4791-8970-e50e73f5dcab Oct 2 19:41:39.765071 kubelet[2207]: E1002 19:41:39.765027 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:40.766692 kubelet[2207]: E1002 19:41:40.766620 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:41.767697 kubelet[2207]: E1002 19:41:41.767623 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:41.822124 kubelet[2207]: E1002 19:41:41.822065 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:42.768764 kubelet[2207]: E1002 19:41:42.768692 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:43.769326 kubelet[2207]: E1002 19:41:43.769246 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:44.769882 kubelet[2207]: E1002 
19:41:44.769770 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:45.770508 kubelet[2207]: E1002 19:41:45.770441 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:46.771526 kubelet[2207]: E1002 19:41:46.771446 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:46.824320 kubelet[2207]: E1002 19:41:46.824280 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:47.772734 kubelet[2207]: E1002 19:41:47.772680 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:48.774322 kubelet[2207]: E1002 19:41:48.774269 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:49.775763 kubelet[2207]: E1002 19:41:49.775705 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:50.777045 kubelet[2207]: E1002 19:41:50.776943 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:51.777745 kubelet[2207]: E1002 19:41:51.777695 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:51.825915 kubelet[2207]: E1002 19:41:51.825855 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:52.778587 kubelet[2207]: E1002 19:41:52.778534 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:53.105519 kubelet[2207]: E1002 19:41:53.105098 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab)\"" pod="kube-system/cilium-7shww" podUID=559c54e9-1f43-4791-8970-e50e73f5dcab Oct 2 19:41:53.780335 kubelet[2207]: E1002 19:41:53.780284 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:54.781703 kubelet[2207]: E1002 19:41:54.781650 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:55.783523 kubelet[2207]: E1002 19:41:55.783474 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:56.609369 kubelet[2207]: E1002 19:41:56.609296 2207 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:56.784511 kubelet[2207]: E1002 19:41:56.784407 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:56.827125 kubelet[2207]: E1002 19:41:56.827082 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:57.785486 kubelet[2207]: E1002 19:41:57.785437 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:58.786479 kubelet[2207]: E1002 19:41:58.786432 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:59.787612 kubelet[2207]: E1002 19:41:59.787533 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:00.788184 kubelet[2207]: E1002 19:42:00.788112 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:01.789182 kubelet[2207]: E1002 19:42:01.789102 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:01.828487 kubelet[2207]: E1002 19:42:01.828279 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:02.790420 kubelet[2207]: E1002 19:42:02.790372 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:03.791560 kubelet[2207]: E1002 19:42:03.791509 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:04.793312 kubelet[2207]: E1002 19:42:04.793230 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:05.794485 kubelet[2207]: E1002 19:42:05.794400 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:06.795410 kubelet[2207]: E1002 19:42:06.795360 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:06.829339 kubelet[2207]: E1002 19:42:06.829286 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:07.105615 kubelet[2207]: E1002 19:42:07.105107 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab)\"" pod="kube-system/cilium-7shww" podUID=559c54e9-1f43-4791-8970-e50e73f5dcab Oct 2 19:42:07.796330 kubelet[2207]: E1002 19:42:07.796254 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:08.796676 kubelet[2207]: E1002 19:42:08.796624 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:09.798304 kubelet[2207]: E1002 19:42:09.798254 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:10.799688 kubelet[2207]: E1002 19:42:10.799639 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:11.800721 kubelet[2207]: E1002 19:42:11.800641 2207 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:11.830801 kubelet[2207]: E1002 19:42:11.830751 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:12.801098 kubelet[2207]: E1002 19:42:12.801021 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:13.802328 kubelet[2207]: E1002 19:42:13.802255 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:14.802724 kubelet[2207]: E1002 19:42:14.802642 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:15.803902 kubelet[2207]: E1002 19:42:15.803821 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:16.609049 kubelet[2207]: E1002 19:42:16.608981 2207 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:16.804881 kubelet[2207]: E1002 19:42:16.804835 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:16.832137 kubelet[2207]: E1002 19:42:16.832103 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:17.806260 kubelet[2207]: E1002 19:42:17.806215 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:18.807617 kubelet[2207]: E1002 19:42:18.807512 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:19.808420 kubelet[2207]: E1002 19:42:19.808354 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:20.105494 kubelet[2207]: E1002 19:42:20.105101 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab)\"" pod="kube-system/cilium-7shww" podUID=559c54e9-1f43-4791-8970-e50e73f5dcab Oct 2 19:42:20.808826 kubelet[2207]: E1002 19:42:20.808762 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:21.809860 kubelet[2207]: E1002 19:42:21.809776 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:21.834092 kubelet[2207]: E1002 19:42:21.834054 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:22.810238 kubelet[2207]: E1002 19:42:22.810190 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:23.811869 kubelet[2207]: E1002 19:42:23.811791 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:42:24.812566 kubelet[2207]: E1002 19:42:24.812502 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:25.812998 kubelet[2207]: E1002 19:42:25.812894 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:26.813576 kubelet[2207]: E1002 19:42:26.813471 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:26.835735 kubelet[2207]: E1002 19:42:26.835699 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:27.814043 kubelet[2207]: E1002 19:42:27.813992 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:28.815215 kubelet[2207]: E1002 19:42:28.815135 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:29.816820 kubelet[2207]: E1002 19:42:29.816764 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:30.817859 kubelet[2207]: E1002 19:42:30.817755 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:31.818807 kubelet[2207]: E1002 19:42:31.818764 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:31.837127 kubelet[2207]: E1002 19:42:31.837085 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:32.820296 kubelet[2207]: E1002 19:42:32.820231 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:33.105511 kubelet[2207]: E1002 19:42:33.105109 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab)\"" pod="kube-system/cilium-7shww" podUID=559c54e9-1f43-4791-8970-e50e73f5dcab Oct 2 19:42:33.821161 kubelet[2207]: E1002 19:42:33.821092 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:34.822243 kubelet[2207]: E1002 19:42:34.822188 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:35.823704 kubelet[2207]: E1002 19:42:35.823635 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:36.608879 kubelet[2207]: E1002 19:42:36.608836 2207 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:36.824072 kubelet[2207]: E1002 19:42:36.824014 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:36.838174 kubelet[2207]: E1002 19:42:36.838106 2207 kubelet.go:2373] "Container runtime network not 
ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:37.825361 kubelet[2207]: E1002 19:42:37.825281 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:38.826904 kubelet[2207]: E1002 19:42:38.826857 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:39.828661 kubelet[2207]: E1002 19:42:39.828584 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:40.829798 kubelet[2207]: E1002 19:42:40.829711 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:41.830426 kubelet[2207]: E1002 19:42:41.830375 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:41.839650 kubelet[2207]: E1002 19:42:41.839613 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:42.831710 kubelet[2207]: E1002 19:42:42.831637 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:43.832860 kubelet[2207]: E1002 19:42:43.832797 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:44.107981 env[1750]: time="2023-10-02T19:42:44.107557985Z" level=info msg="CreateContainer within sandbox \"e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}" Oct 2 19:42:44.127908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1673074638.mount: Deactivated successfully. Oct 2 19:42:44.140211 env[1750]: time="2023-10-02T19:42:44.140085825Z" level=info msg="CreateContainer within sandbox \"e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id \"14e0d522b315f78f4e235cc20cb1431124fc75a333134242a082413e67e68fa3\"" Oct 2 19:42:44.141888 env[1750]: time="2023-10-02T19:42:44.141788967Z" level=info msg="StartContainer for \"14e0d522b315f78f4e235cc20cb1431124fc75a333134242a082413e67e68fa3\"" Oct 2 19:42:44.196911 systemd[1]: Started cri-containerd-14e0d522b315f78f4e235cc20cb1431124fc75a333134242a082413e67e68fa3.scope. Oct 2 19:42:44.230852 systemd[1]: cri-containerd-14e0d522b315f78f4e235cc20cb1431124fc75a333134242a082413e67e68fa3.scope: Deactivated successfully. 
Oct 2 19:42:44.250352 env[1750]: time="2023-10-02T19:42:44.250268705Z" level=info msg="shim disconnected" id=14e0d522b315f78f4e235cc20cb1431124fc75a333134242a082413e67e68fa3 Oct 2 19:42:44.250625 env[1750]: time="2023-10-02T19:42:44.250353593Z" level=warning msg="cleaning up after shim disconnected" id=14e0d522b315f78f4e235cc20cb1431124fc75a333134242a082413e67e68fa3 namespace=k8s.io Oct 2 19:42:44.250625 env[1750]: time="2023-10-02T19:42:44.250379814Z" level=info msg="cleaning up dead shim" Oct 2 19:42:44.277339 env[1750]: time="2023-10-02T19:42:44.277248192Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:42:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2855 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:42:44Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/14e0d522b315f78f4e235cc20cb1431124fc75a333134242a082413e67e68fa3/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:42:44.277825 env[1750]: time="2023-10-02T19:42:44.277719770Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:42:44.278323 env[1750]: time="2023-10-02T19:42:44.278246945Z" level=error msg="Failed to pipe stdout of container \"14e0d522b315f78f4e235cc20cb1431124fc75a333134242a082413e67e68fa3\"" error="reading from a closed fifo" Oct 2 19:42:44.279386 env[1750]: time="2023-10-02T19:42:44.279307691Z" level=error msg="Failed to pipe stderr of container \"14e0d522b315f78f4e235cc20cb1431124fc75a333134242a082413e67e68fa3\"" error="reading from a closed fifo" Oct 2 19:42:44.281711 env[1750]: time="2023-10-02T19:42:44.281612074Z" level=error msg="StartContainer for \"14e0d522b315f78f4e235cc20cb1431124fc75a333134242a082413e67e68fa3\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:42:44.282302 kubelet[2207]: E1002 19:42:44.281982 2207 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="14e0d522b315f78f4e235cc20cb1431124fc75a333134242a082413e67e68fa3" Oct 2 19:42:44.282302 kubelet[2207]: E1002 19:42:44.282181 2207 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:42:44.282302 kubelet[2207]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:42:44.282302 kubelet[2207]: rm /hostbin/cilium-mount Oct 2 19:42:44.282715 kubelet[2207]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ggn8l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:42:44.282838 kubelet[2207]: E1002 19:42:44.282252 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7shww" podUID=559c54e9-1f43-4791-8970-e50e73f5dcab Oct 2 19:42:44.623721 kubelet[2207]: I1002 19:42:44.622779 2207 scope.go:115] "RemoveContainer" containerID="e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666" Oct 2 19:42:44.623721 kubelet[2207]: I1002 19:42:44.623420 2207 scope.go:115] "RemoveContainer" containerID="e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666" Oct 2 19:42:44.626740 env[1750]: time="2023-10-02T19:42:44.626661696Z" level=info msg="RemoveContainer for \"e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666\"" Oct 2 19:42:44.629532 env[1750]: time="2023-10-02T19:42:44.629471499Z" level=info msg="RemoveContainer for \"e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666\"" Oct 2 19:42:44.629995 env[1750]: time="2023-10-02T19:42:44.629927825Z" level=error msg="RemoveContainer for \"e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666\" failed" error="failed to set removing state for container \"e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666\": container is already in removing state" Oct 2 19:42:44.631371 kubelet[2207]: E1002 19:42:44.630423 2207 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666\": container is already in removing state" 
containerID="e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666" Oct 2 19:42:44.631371 kubelet[2207]: E1002 19:42:44.630483 2207 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666": container is already in removing state; Skipping pod "cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab)" Oct 2 19:42:44.631371 kubelet[2207]: E1002 19:42:44.630918 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-7shww_kube-system(559c54e9-1f43-4791-8970-e50e73f5dcab)\"" pod="kube-system/cilium-7shww" podUID=559c54e9-1f43-4791-8970-e50e73f5dcab Oct 2 19:42:44.635454 env[1750]: time="2023-10-02T19:42:44.635370357Z" level=info msg="RemoveContainer for \"e5956d541b5dbafac6ae8be554a7e7fa9bb359d0ecc0de90e9c50cf5874cb666\" returns successfully" Oct 2 19:42:44.833042 kubelet[2207]: E1002 19:42:44.832996 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:45.123264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14e0d522b315f78f4e235cc20cb1431124fc75a333134242a082413e67e68fa3-rootfs.mount: Deactivated successfully. Oct 2 19:42:45.632615 env[1750]: time="2023-10-02T19:42:45.629002976Z" level=info msg="StopPodSandbox for \"e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2\"" Oct 2 19:42:45.632615 env[1750]: time="2023-10-02T19:42:45.629092821Z" level=info msg="Container to stop \"14e0d522b315f78f4e235cc20cb1431124fc75a333134242a082413e67e68fa3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:42:45.631578 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2-shm.mount: Deactivated successfully. Oct 2 19:42:45.650179 systemd[1]: cri-containerd-e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2.scope: Deactivated successfully. Oct 2 19:42:45.654197 kernel: kauditd_printk_skb: 283 callbacks suppressed Oct 2 19:42:45.654352 kernel: audit: type=1334 audit(1696275765.649:732): prog-id=80 op=UNLOAD Oct 2 19:42:45.649000 audit: BPF prog-id=80 op=UNLOAD Oct 2 19:42:45.658000 audit: BPF prog-id=83 op=UNLOAD Oct 2 19:42:45.663274 kernel: audit: type=1334 audit(1696275765.658:733): prog-id=83 op=UNLOAD Oct 2 19:42:45.703044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2-rootfs.mount: Deactivated successfully. 
Oct 2 19:42:45.720533 env[1750]: time="2023-10-02T19:42:45.720451315Z" level=info msg="shim disconnected" id=e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2 Oct 2 19:42:45.720533 env[1750]: time="2023-10-02T19:42:45.720528236Z" level=warning msg="cleaning up after shim disconnected" id=e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2 namespace=k8s.io Oct 2 19:42:45.720930 env[1750]: time="2023-10-02T19:42:45.720550785Z" level=info msg="cleaning up dead shim" Oct 2 19:42:45.747979 env[1750]: time="2023-10-02T19:42:45.747885862Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:42:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2886 runtime=io.containerd.runc.v2\n" Oct 2 19:42:45.748570 env[1750]: time="2023-10-02T19:42:45.748491429Z" level=info msg="TearDown network for sandbox \"e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2\" successfully" Oct 2 19:42:45.748570 env[1750]: time="2023-10-02T19:42:45.748549282Z" level=info msg="StopPodSandbox for \"e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2\" returns successfully" Oct 2 19:42:45.834024 kubelet[2207]: E1002 19:42:45.833903 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:45.855228 kubelet[2207]: I1002 19:42:45.854309 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggn8l\" (UniqueName: \"kubernetes.io/projected/559c54e9-1f43-4791-8970-e50e73f5dcab-kube-api-access-ggn8l\") pod \"559c54e9-1f43-4791-8970-e50e73f5dcab\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " Oct 2 19:42:45.855228 kubelet[2207]: I1002 19:42:45.854384 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-hostproc\") pod \"559c54e9-1f43-4791-8970-e50e73f5dcab\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " Oct 2 19:42:45.855228 kubelet[2207]: I1002 19:42:45.854430 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-host-proc-sys-kernel\") pod \"559c54e9-1f43-4791-8970-e50e73f5dcab\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " Oct 2 19:42:45.855228 kubelet[2207]: I1002 19:42:45.854475 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-cni-path\") pod \"559c54e9-1f43-4791-8970-e50e73f5dcab\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " Oct 2 19:42:45.855228 kubelet[2207]: I1002 19:42:45.854518 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-etc-cni-netd\") pod \"559c54e9-1f43-4791-8970-e50e73f5dcab\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " Oct 2 19:42:45.855228 kubelet[2207]: I1002 19:42:45.854559 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-cilium-cgroup\") pod \"559c54e9-1f43-4791-8970-e50e73f5dcab\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " Oct 2 19:42:45.855733 kubelet[2207]: I1002 19:42:45.854604 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/559c54e9-1f43-4791-8970-e50e73f5dcab-clustermesh-secrets\") pod \"559c54e9-1f43-4791-8970-e50e73f5dcab\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " Oct 2 19:42:45.855733 kubelet[2207]: I1002 19:42:45.854648 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-host-proc-sys-net\") pod \"559c54e9-1f43-4791-8970-e50e73f5dcab\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " Oct 2 19:42:45.855733 kubelet[2207]: I1002 19:42:45.854689 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-cilium-run\") pod \"559c54e9-1f43-4791-8970-e50e73f5dcab\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " Oct 2 19:42:45.855733 kubelet[2207]: I1002 19:42:45.854725 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-bpf-maps\") pod \"559c54e9-1f43-4791-8970-e50e73f5dcab\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " Oct 2 19:42:45.855733 kubelet[2207]: I1002 19:42:45.854770 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/559c54e9-1f43-4791-8970-e50e73f5dcab-cilium-config-path\") pod \"559c54e9-1f43-4791-8970-e50e73f5dcab\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " Oct 2 19:42:45.855733 kubelet[2207]: I1002 19:42:45.854813 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-xtables-lock\") pod \"559c54e9-1f43-4791-8970-e50e73f5dcab\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " Oct 2 19:42:45.856182 kubelet[2207]: I1002 19:42:45.854852 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-lib-modules\") pod \"559c54e9-1f43-4791-8970-e50e73f5dcab\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " Oct 2 19:42:45.856182 kubelet[2207]: I1002 19:42:45.854896 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/559c54e9-1f43-4791-8970-e50e73f5dcab-hubble-tls\") pod \"559c54e9-1f43-4791-8970-e50e73f5dcab\" (UID: \"559c54e9-1f43-4791-8970-e50e73f5dcab\") " Oct 2 19:42:45.856921 kubelet[2207]: I1002 19:42:45.856632 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-hostproc" (OuterVolumeSpecName: "hostproc") pod "559c54e9-1f43-4791-8970-e50e73f5dcab" (UID: "559c54e9-1f43-4791-8970-e50e73f5dcab"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:45.856921 kubelet[2207]: I1002 19:42:45.856736 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "559c54e9-1f43-4791-8970-e50e73f5dcab" (UID: "559c54e9-1f43-4791-8970-e50e73f5dcab"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:45.856921 kubelet[2207]: I1002 19:42:45.856789 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-cni-path" (OuterVolumeSpecName: "cni-path") pod "559c54e9-1f43-4791-8970-e50e73f5dcab" (UID: "559c54e9-1f43-4791-8970-e50e73f5dcab"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:45.856921 kubelet[2207]: I1002 19:42:45.856831 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "559c54e9-1f43-4791-8970-e50e73f5dcab" (UID: "559c54e9-1f43-4791-8970-e50e73f5dcab"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:45.856921 kubelet[2207]: I1002 19:42:45.856873 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "559c54e9-1f43-4791-8970-e50e73f5dcab" (UID: "559c54e9-1f43-4791-8970-e50e73f5dcab"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:45.859543 kubelet[2207]: I1002 19:42:45.856918 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "559c54e9-1f43-4791-8970-e50e73f5dcab" (UID: "559c54e9-1f43-4791-8970-e50e73f5dcab"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:45.859543 kubelet[2207]: I1002 19:42:45.856957 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "559c54e9-1f43-4791-8970-e50e73f5dcab" (UID: "559c54e9-1f43-4791-8970-e50e73f5dcab"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:45.859543 kubelet[2207]: I1002 19:42:45.856997 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "559c54e9-1f43-4791-8970-e50e73f5dcab" (UID: "559c54e9-1f43-4791-8970-e50e73f5dcab"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:45.859543 kubelet[2207]: I1002 19:42:45.857045 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "559c54e9-1f43-4791-8970-e50e73f5dcab" (UID: "559c54e9-1f43-4791-8970-e50e73f5dcab"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:45.859543 kubelet[2207]: W1002 19:42:45.857378 2207 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/559c54e9-1f43-4791-8970-e50e73f5dcab/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:42:45.861505 kubelet[2207]: I1002 19:42:45.861385 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "559c54e9-1f43-4791-8970-e50e73f5dcab" (UID: "559c54e9-1f43-4791-8970-e50e73f5dcab"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:45.863062 kubelet[2207]: I1002 19:42:45.863005 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/559c54e9-1f43-4791-8970-e50e73f5dcab-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "559c54e9-1f43-4791-8970-e50e73f5dcab" (UID: "559c54e9-1f43-4791-8970-e50e73f5dcab"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:42:45.869454 systemd[1]: var-lib-kubelet-pods-559c54e9\x2d1f43\x2d4791\x2d8970\x2de50e73f5dcab-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:42:45.872183 kubelet[2207]: I1002 19:42:45.872108 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/559c54e9-1f43-4791-8970-e50e73f5dcab-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "559c54e9-1f43-4791-8970-e50e73f5dcab" (UID: "559c54e9-1f43-4791-8970-e50e73f5dcab"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:42:45.879234 systemd[1]: var-lib-kubelet-pods-559c54e9\x2d1f43\x2d4791\x2d8970\x2de50e73f5dcab-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:42:45.881727 kubelet[2207]: I1002 19:42:45.881652 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/559c54e9-1f43-4791-8970-e50e73f5dcab-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "559c54e9-1f43-4791-8970-e50e73f5dcab" (UID: "559c54e9-1f43-4791-8970-e50e73f5dcab"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:42:45.885930 kubelet[2207]: I1002 19:42:45.883819 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/559c54e9-1f43-4791-8970-e50e73f5dcab-kube-api-access-ggn8l" (OuterVolumeSpecName: "kube-api-access-ggn8l") pod "559c54e9-1f43-4791-8970-e50e73f5dcab" (UID: "559c54e9-1f43-4791-8970-e50e73f5dcab"). InnerVolumeSpecName "kube-api-access-ggn8l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:42:45.955379 kubelet[2207]: I1002 19:42:45.955317 2207 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-host-proc-sys-net\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:42:45.955379 kubelet[2207]: I1002 19:42:45.955375 2207 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-cilium-run\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:42:45.955618 kubelet[2207]: I1002 19:42:45.955409 2207 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-bpf-maps\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:42:45.955618 kubelet[2207]: I1002 19:42:45.955436 2207 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/559c54e9-1f43-4791-8970-e50e73f5dcab-cilium-config-path\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:42:45.955618 kubelet[2207]: I1002 19:42:45.955460 2207 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-xtables-lock\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:42:45.955618 kubelet[2207]: I1002 19:42:45.955484 2207 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-lib-modules\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:42:45.955618 kubelet[2207]: I1002 19:42:45.955508 2207 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/559c54e9-1f43-4791-8970-e50e73f5dcab-hubble-tls\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:42:45.955618 kubelet[2207]: I1002 19:42:45.955532 2207 reconciler.go:399] "Volume detached for volume \"kube-api-access-ggn8l\" (UniqueName: \"kubernetes.io/projected/559c54e9-1f43-4791-8970-e50e73f5dcab-kube-api-access-ggn8l\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:42:45.955618 kubelet[2207]: I1002 19:42:45.955554 2207 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-hostproc\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:42:45.955618 kubelet[2207]: I1002 19:42:45.955577 2207 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-host-proc-sys-kernel\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:42:45.956207 kubelet[2207]: I1002 19:42:45.955601 2207 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-cni-path\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:42:45.956207 kubelet[2207]: I1002 19:42:45.955624 2207 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-etc-cni-netd\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:42:45.956207 kubelet[2207]: I1002 19:42:45.955647 2207 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/559c54e9-1f43-4791-8970-e50e73f5dcab-cilium-cgroup\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:42:45.956207 kubelet[2207]: I1002 
19:42:45.955670 2207 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/559c54e9-1f43-4791-8970-e50e73f5dcab-clustermesh-secrets\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:42:46.123072 systemd[1]: var-lib-kubelet-pods-559c54e9\x2d1f43\x2d4791\x2d8970\x2de50e73f5dcab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dggn8l.mount: Deactivated successfully. Oct 2 19:42:46.633230 kubelet[2207]: I1002 19:42:46.633188 2207 scope.go:115] "RemoveContainer" containerID="14e0d522b315f78f4e235cc20cb1431124fc75a333134242a082413e67e68fa3" Oct 2 19:42:46.636893 env[1750]: time="2023-10-02T19:42:46.636486811Z" level=info msg="RemoveContainer for \"14e0d522b315f78f4e235cc20cb1431124fc75a333134242a082413e67e68fa3\"" Oct 2 19:42:46.641099 env[1750]: time="2023-10-02T19:42:46.640914460Z" level=info msg="RemoveContainer for \"14e0d522b315f78f4e235cc20cb1431124fc75a333134242a082413e67e68fa3\" returns successfully" Oct 2 19:42:46.647379 systemd[1]: Removed slice kubepods-burstable-pod559c54e9_1f43_4791_8970_e50e73f5dcab.slice. Oct 2 19:42:46.834419 kubelet[2207]: E1002 19:42:46.834356 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:46.841487 kubelet[2207]: E1002 19:42:46.841451 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:47.118002 kubelet[2207]: I1002 19:42:47.117938 2207 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=559c54e9-1f43-4791-8970-e50e73f5dcab path="/var/lib/kubelet/pods/559c54e9-1f43-4791-8970-e50e73f5dcab/volumes" Oct 2 19:42:47.356920 kubelet[2207]: W1002 19:42:47.356859 2207 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod559c54e9_1f43_4791_8970_e50e73f5dcab.slice/cri-containerd-14e0d522b315f78f4e235cc20cb1431124fc75a333134242a082413e67e68fa3.scope WatchSource:0}: container "14e0d522b315f78f4e235cc20cb1431124fc75a333134242a082413e67e68fa3" in namespace "k8s.io": not found Oct 2 19:42:47.835036 kubelet[2207]: E1002 19:42:47.834985 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:48.836602 kubelet[2207]: E1002 19:42:48.836549 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:49.838332 kubelet[2207]: E1002 19:42:49.838249 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:50.168107 kubelet[2207]: I1002 19:42:50.167978 2207 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:42:50.168425 kubelet[2207]: E1002 19:42:50.168368 2207 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="559c54e9-1f43-4791-8970-e50e73f5dcab" containerName="mount-cgroup" Oct 2 19:42:50.168568 kubelet[2207]: E1002 19:42:50.168545 2207 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="559c54e9-1f43-4791-8970-e50e73f5dcab" containerName="mount-cgroup" Oct 2 19:42:50.168697 kubelet[2207]: E1002 19:42:50.168675 2207 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="559c54e9-1f43-4791-8970-e50e73f5dcab" containerName="mount-cgroup" Oct 2 19:42:50.168837 kubelet[2207]: E1002 19:42:50.168811 2207 cpu_manager.go:394] 
"RemoveStaleState: removing container" podUID="559c54e9-1f43-4791-8970-e50e73f5dcab" containerName="mount-cgroup" Oct 2 19:42:50.169002 kubelet[2207]: I1002 19:42:50.168978 2207 memory_manager.go:345] "RemoveStaleState removing state" podUID="559c54e9-1f43-4791-8970-e50e73f5dcab" containerName="mount-cgroup" Oct 2 19:42:50.169119 kubelet[2207]: I1002 19:42:50.169098 2207 memory_manager.go:345] "RemoveStaleState removing state" podUID="559c54e9-1f43-4791-8970-e50e73f5dcab" containerName="mount-cgroup" Oct 2 19:42:50.169256 kubelet[2207]: I1002 19:42:50.169234 2207 memory_manager.go:345] "RemoveStaleState removing state" podUID="559c54e9-1f43-4791-8970-e50e73f5dcab" containerName="mount-cgroup" Oct 2 19:42:50.169386 kubelet[2207]: I1002 19:42:50.169348 2207 memory_manager.go:345] "RemoveStaleState removing state" podUID="559c54e9-1f43-4791-8970-e50e73f5dcab" containerName="mount-cgroup" Oct 2 19:42:50.180668 kubelet[2207]: I1002 19:42:50.179429 2207 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:42:50.180668 kubelet[2207]: E1002 19:42:50.179568 2207 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="559c54e9-1f43-4791-8970-e50e73f5dcab" containerName="mount-cgroup" Oct 2 19:42:50.180668 kubelet[2207]: I1002 19:42:50.179616 2207 memory_manager.go:345] "RemoveStaleState removing state" podUID="559c54e9-1f43-4791-8970-e50e73f5dcab" containerName="mount-cgroup" Oct 2 19:42:50.180668 kubelet[2207]: I1002 19:42:50.179661 2207 memory_manager.go:345] "RemoveStaleState removing state" podUID="559c54e9-1f43-4791-8970-e50e73f5dcab" containerName="mount-cgroup" Oct 2 19:42:50.180668 kubelet[2207]: E1002 19:42:50.179704 2207 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="559c54e9-1f43-4791-8970-e50e73f5dcab" containerName="mount-cgroup" Oct 2 19:42:50.180343 systemd[1]: Created slice kubepods-besteffort-pod322f9277_7f70_4d2f_9338_46c47013693a.slice. Oct 2 19:42:50.194486 systemd[1]: Created slice kubepods-burstable-pod791452ab_79ca_4aeb_b69a_946b9bd5f34f.slice. 
Oct 2 19:42:50.285576 kubelet[2207]: I1002 19:42:50.285463 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/791452ab-79ca-4aeb-b69a-946b9bd5f34f-cilium-ipsec-secrets\") pod \"cilium-fsmgj\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " pod="kube-system/cilium-fsmgj" Oct 2 19:42:50.285846 kubelet[2207]: I1002 19:42:50.285809 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-host-proc-sys-kernel\") pod \"cilium-fsmgj\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " pod="kube-system/cilium-fsmgj" Oct 2 19:42:50.286128 kubelet[2207]: I1002 19:42:50.286096 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-bpf-maps\") pod \"cilium-fsmgj\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " pod="kube-system/cilium-fsmgj" Oct 2 19:42:50.286284 kubelet[2207]: I1002 19:42:50.286214 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-hostproc\") pod \"cilium-fsmgj\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " pod="kube-system/cilium-fsmgj" Oct 2 19:42:50.286353 kubelet[2207]: I1002 19:42:50.286285 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-cni-path\") pod \"cilium-fsmgj\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " pod="kube-system/cilium-fsmgj" Oct 2 19:42:50.286353 kubelet[2207]: I1002 19:42:50.286337 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-etc-cni-netd\") pod \"cilium-fsmgj\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " pod="kube-system/cilium-fsmgj" Oct 2 19:42:50.286498 kubelet[2207]: I1002 19:42:50.286383 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-xtables-lock\") pod \"cilium-fsmgj\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " pod="kube-system/cilium-fsmgj" Oct 2 19:42:50.286498 kubelet[2207]: I1002 19:42:50.286437 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/322f9277-7f70-4d2f-9338-46c47013693a-cilium-config-path\") pod \"cilium-operator-69b677f97c-bsj4c\" (UID: \"322f9277-7f70-4d2f-9338-46c47013693a\") " pod="kube-system/cilium-operator-69b677f97c-bsj4c" Oct 2 19:42:50.286498 kubelet[2207]: I1002 19:42:50.286481 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-cilium-run\") pod \"cilium-fsmgj\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " pod="kube-system/cilium-fsmgj" Oct 2 19:42:50.286676 kubelet[2207]: I1002 19:42:50.286526 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krnhw\" (UniqueName: 
\"kubernetes.io/projected/791452ab-79ca-4aeb-b69a-946b9bd5f34f-kube-api-access-krnhw\") pod \"cilium-fsmgj\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " pod="kube-system/cilium-fsmgj" Oct 2 19:42:50.286676 kubelet[2207]: I1002 19:42:50.286576 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/791452ab-79ca-4aeb-b69a-946b9bd5f34f-clustermesh-secrets\") pod \"cilium-fsmgj\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " pod="kube-system/cilium-fsmgj" Oct 2 19:42:50.286676 kubelet[2207]: I1002 19:42:50.286620 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/791452ab-79ca-4aeb-b69a-946b9bd5f34f-hubble-tls\") pod \"cilium-fsmgj\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " pod="kube-system/cilium-fsmgj" Oct 2 19:42:50.286676 kubelet[2207]: I1002 19:42:50.286670 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d78qz\" (UniqueName: \"kubernetes.io/projected/322f9277-7f70-4d2f-9338-46c47013693a-kube-api-access-d78qz\") pod \"cilium-operator-69b677f97c-bsj4c\" (UID: \"322f9277-7f70-4d2f-9338-46c47013693a\") " pod="kube-system/cilium-operator-69b677f97c-bsj4c" Oct 2 19:42:50.286921 kubelet[2207]: I1002 19:42:50.286714 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-cilium-cgroup\") pod \"cilium-fsmgj\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " pod="kube-system/cilium-fsmgj" Oct 2 19:42:50.286921 kubelet[2207]: I1002 19:42:50.286761 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-lib-modules\") pod \"cilium-fsmgj\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " pod="kube-system/cilium-fsmgj" Oct 2 19:42:50.286921 kubelet[2207]: I1002 19:42:50.286805 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/791452ab-79ca-4aeb-b69a-946b9bd5f34f-cilium-config-path\") pod \"cilium-fsmgj\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " pod="kube-system/cilium-fsmgj" Oct 2 19:42:50.286921 kubelet[2207]: I1002 19:42:50.286847 2207 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-host-proc-sys-net\") pod \"cilium-fsmgj\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " pod="kube-system/cilium-fsmgj" Oct 2 19:42:50.489098 env[1750]: time="2023-10-02T19:42:50.488988713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-bsj4c,Uid:322f9277-7f70-4d2f-9338-46c47013693a,Namespace:kube-system,Attempt:0,}" Oct 2 19:42:50.511115 env[1750]: time="2023-10-02T19:42:50.511019453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fsmgj,Uid:791452ab-79ca-4aeb-b69a-946b9bd5f34f,Namespace:kube-system,Attempt:0,}" Oct 2 19:42:50.528764 env[1750]: time="2023-10-02T19:42:50.528589944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:42:50.528764 env[1750]: time="2023-10-02T19:42:50.528686809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:42:50.528764 env[1750]: time="2023-10-02T19:42:50.528715178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:42:50.529639 env[1750]: time="2023-10-02T19:42:50.529449627Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/21a80bd2f0c609363ff6908d0ea36596ec8add1b887e213a1ac503c3531e236c pid=2918 runtime=io.containerd.runc.v2 Oct 2 19:42:50.550231 env[1750]: time="2023-10-02T19:42:50.550057754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:42:50.550231 env[1750]: time="2023-10-02T19:42:50.550166680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:42:50.550578 env[1750]: time="2023-10-02T19:42:50.550200257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:42:50.551638 env[1750]: time="2023-10-02T19:42:50.551446695Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91 pid=2934 runtime=io.containerd.runc.v2 Oct 2 19:42:50.570551 systemd[1]: Started cri-containerd-21a80bd2f0c609363ff6908d0ea36596ec8add1b887e213a1ac503c3531e236c.scope. Oct 2 19:42:50.605007 systemd[1]: Started cri-containerd-9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91.scope. 
Oct 2 19:42:50.623000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.623000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.640213 kernel: audit: type=1400 audit(1696275770.623:734): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.640392 kernel: audit: type=1400 audit(1696275770.623:735): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.623000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.652998 kernel: audit: type=1400 audit(1696275770.623:736): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.653132 kernel: audit: type=1400 audit(1696275770.623:737): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.623000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.623000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.665716 kernel: audit: type=1400 audit(1696275770.623:738): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.623000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.674767 kernel: audit: type=1400 audit(1696275770.623:739): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.682763 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:42:50.682920 kernel: audit: type=1400 audit(1696275770.623:740): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.623000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.693121 kernel: audit: audit_lost=3 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:42:50.694850 kernel: audit: backlog limit exceeded Oct 2 19:42:50.623000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.702906 kernel: audit: type=1400 audit(1696275770.623:741): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.623000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.710956 kernel: audit: type=1400 audit(1696275770.623:742): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.711077 kernel: audit: type=1400 audit(1696275770.634:743): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.634000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.634000 audit: BPF prog-id=87 op=LOAD Oct 2 19:42:50.722742 kernel: audit: type=1334 audit(1696275770.634:744): prog-id=87 op=LOAD Oct 2 19:42:50.640000 audit[2931]: AVC avc: denied { bpf } for pid=2931 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.640000 audit[2931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=400011db38 a2=10 a3=0 items=0 ppid=2918 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:50.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231613830626432663063363039333633666636393038643065613336 Oct 2 19:42:50.640000 audit[2931]: AVC avc: denied { perfmon } for pid=2931 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.640000 audit[2931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=400011d5a0 a2=3c a3=0 items=0 ppid=2918 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:50.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231613830626432663063363039333633666636393038643065613336 Oct 2 19:42:50.640000 audit[2931]: AVC avc: denied { bpf } for pid=2931 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.640000 audit[2931]: AVC avc: denied { bpf } for pid=2931 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.640000 audit[2931]: AVC avc: denied { bpf } for 
pid=2931 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.640000 audit[2931]: AVC avc: denied { perfmon } for pid=2931 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.640000 audit[2931]: AVC avc: denied { perfmon } for pid=2931 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.640000 audit[2931]: AVC avc: denied { perfmon } for pid=2931 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.640000 audit[2931]: AVC avc: denied { perfmon } for pid=2931 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.640000 audit[2931]: AVC avc: denied { perfmon } for pid=2931 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.640000 audit[2931]: AVC avc: denied { bpf } for pid=2931 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.640000 audit[2931]: AVC avc: denied { bpf } for pid=2931 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.640000 audit: BPF prog-id=88 op=LOAD Oct 2 19:42:50.640000 audit[2931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400011d8e0 a2=78 a3=0 items=0 ppid=2918 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:50.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231613830626432663063363039333633666636393038643065613336 Oct 2 19:42:50.640000 audit[2931]: AVC avc: denied { bpf } for pid=2931 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.640000 audit[2931]: AVC avc: denied { bpf } for pid=2931 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.640000 audit[2931]: AVC avc: denied { perfmon } for pid=2931 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.640000 audit[2931]: AVC avc: denied { perfmon } for pid=2931 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.640000 audit[2931]: AVC avc: denied { perfmon } for pid=2931 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.640000 audit[2931]: AVC avc: denied { perfmon } for pid=2931 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:42:50.640000 audit[2931]: AVC avc: denied { perfmon } for pid=2931 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.640000 audit[2931]: AVC avc: denied { bpf } for pid=2931 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.640000 audit[2931]: AVC avc: denied { bpf } for pid=2931 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.640000 audit: BPF prog-id=89 op=LOAD Oct 2 19:42:50.640000 audit[2931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400011d670 a2=78 a3=0 items=0 ppid=2918 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:50.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231613830626432663063363039333633666636393038643065613336 Oct 2 19:42:50.640000 audit: BPF prog-id=89 op=UNLOAD Oct 2 19:42:50.641000 audit: BPF prog-id=88 op=UNLOAD Oct 2 19:42:50.641000 audit[2931]: AVC avc: denied { bpf } for pid=2931 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.641000 audit[2931]: AVC avc: denied { bpf } for pid=2931 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.641000 audit[2931]: AVC avc: denied { bpf } for pid=2931 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.641000 audit[2931]: AVC avc: denied { perfmon } for pid=2931 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.641000 audit[2931]: AVC avc: denied { perfmon } for pid=2931 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.641000 audit[2931]: AVC avc: denied { perfmon } for pid=2931 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.641000 audit[2931]: AVC avc: denied { perfmon } for pid=2931 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.641000 audit[2931]: AVC avc: denied { perfmon } for pid=2931 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.641000 audit[2931]: AVC avc: denied { bpf } for pid=2931 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.641000 audit[2931]: AVC avc: denied { bpf } for pid=2931 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:42:50.641000 audit: BPF prog-id=90 op=LOAD Oct 2 19:42:50.641000 audit[2931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400011db40 a2=78 a3=0 items=0 ppid=2918 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:50.641000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231613830626432663063363039333633666636393038643065613336 Oct 2 19:42:50.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.693000 audit: BPF prog-id=91 op=LOAD Oct 2 19:42:50.702000 audit[2954]: AVC avc: denied { bpf } for pid=2954 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.702000 audit[2954]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000145b38 a2=10 a3=0 items=0 ppid=2934 pid=2954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:50.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964323464393662653032343036396664376536636437303064306632 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { perfmon } for 
pid=2954 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=2934 pid=2954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:50.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964323464393662653032343036396664376536636437303064306632 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { bpf } for pid=2954 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { bpf } for pid=2954 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { bpf } for pid=2954 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { perfmon } for pid=2954 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { perfmon } for pid=2954 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { perfmon } for pid=2954 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { perfmon } for pid=2954 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { perfmon } for pid=2954 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { bpf } for pid=2954 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { bpf } for pid=2954 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit: BPF prog-id=92 op=LOAD Oct 2 19:42:50.727000 audit[2954]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001458e0 a2=78 a3=0 items=0 ppid=2934 pid=2954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:50.727000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964323464393662653032343036396664376536636437303064306632 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { bpf } for pid=2954 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { bpf } for pid=2954 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { perfmon } for pid=2954 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { perfmon } for pid=2954 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { perfmon } for pid=2954 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { perfmon } for pid=2954 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { perfmon } for pid=2954 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { bpf } for pid=2954 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { bpf } for pid=2954 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit: BPF prog-id=93 op=LOAD Oct 2 19:42:50.727000 audit[2954]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000145670 a2=78 a3=0 items=0 ppid=2934 pid=2954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:50.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964323464393662653032343036396664376536636437303064306632 Oct 2 19:42:50.727000 audit: BPF prog-id=93 op=UNLOAD Oct 2 19:42:50.727000 audit: BPF prog-id=92 op=UNLOAD Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { bpf } for pid=2954 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { bpf } for pid=2954 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { bpf } for pid=2954 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { perfmon } for pid=2954 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { perfmon } for pid=2954 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { perfmon } for pid=2954 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { perfmon } for pid=2954 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { perfmon } for pid=2954 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { bpf } for pid=2954 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit[2954]: AVC avc: denied { bpf } for pid=2954 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:50.727000 audit: BPF prog-id=94 op=LOAD Oct 2 19:42:50.727000 audit[2954]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000145b40 a2=78 a3=0 items=0 ppid=2934 pid=2954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:50.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964323464393662653032343036396664376536636437303064306632 Oct 2 19:42:50.767095 env[1750]: time="2023-10-02T19:42:50.762609087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fsmgj,Uid:791452ab-79ca-4aeb-b69a-946b9bd5f34f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91\"" Oct 2 19:42:50.772988 env[1750]: time="2023-10-02T19:42:50.772912155Z" level=info msg="CreateContainer within sandbox \"9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:42:50.775784 env[1750]: time="2023-10-02T19:42:50.775696216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-bsj4c,Uid:322f9277-7f70-4d2f-9338-46c47013693a,Namespace:kube-system,Attempt:0,} returns sandbox id \"21a80bd2f0c609363ff6908d0ea36596ec8add1b887e213a1ac503c3531e236c\"" Oct 2 19:42:50.780458 env[1750]: time="2023-10-02T19:42:50.780232523Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\"" Oct 2 19:42:50.797356 env[1750]: time="2023-10-02T19:42:50.797251304Z" level=info msg="CreateContainer within sandbox \"9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91\" for 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"520ec7518aebf699643e8753bc3bea4bf6ff5dbd2d32003df95b86be1347c488\"" Oct 2 19:42:50.798121 env[1750]: time="2023-10-02T19:42:50.798073090Z" level=info msg="StartContainer for \"520ec7518aebf699643e8753bc3bea4bf6ff5dbd2d32003df95b86be1347c488\"" Oct 2 19:42:50.839604 kubelet[2207]: E1002 19:42:50.839551 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:50.845395 systemd[1]: Started cri-containerd-520ec7518aebf699643e8753bc3bea4bf6ff5dbd2d32003df95b86be1347c488.scope. Oct 2 19:42:50.890137 systemd[1]: cri-containerd-520ec7518aebf699643e8753bc3bea4bf6ff5dbd2d32003df95b86be1347c488.scope: Deactivated successfully. Oct 2 19:42:50.921214 env[1750]: time="2023-10-02T19:42:50.921090441Z" level=info msg="shim disconnected" id=520ec7518aebf699643e8753bc3bea4bf6ff5dbd2d32003df95b86be1347c488 Oct 2 19:42:50.921485 env[1750]: time="2023-10-02T19:42:50.921215747Z" level=warning msg="cleaning up after shim disconnected" id=520ec7518aebf699643e8753bc3bea4bf6ff5dbd2d32003df95b86be1347c488 namespace=k8s.io Oct 2 19:42:50.921485 env[1750]: time="2023-10-02T19:42:50.921263364Z" level=info msg="cleaning up dead shim" Oct 2 19:42:50.950780 env[1750]: time="2023-10-02T19:42:50.950684313Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:42:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3016 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:42:50Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/520ec7518aebf699643e8753bc3bea4bf6ff5dbd2d32003df95b86be1347c488/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:42:50.951300 env[1750]: time="2023-10-02T19:42:50.951190506Z" level=error msg="copy shim log" error="read /proc/self/fd/36: file already closed" Oct 2 19:42:50.951708 env[1750]: time="2023-10-02T19:42:50.951643718Z" level=error msg="Failed to pipe stderr of container \"520ec7518aebf699643e8753bc3bea4bf6ff5dbd2d32003df95b86be1347c488\"" error="reading from a closed fifo" Oct 2 19:42:50.951915 env[1750]: time="2023-10-02T19:42:50.951651746Z" level=error msg="Failed to pipe stdout of container \"520ec7518aebf699643e8753bc3bea4bf6ff5dbd2d32003df95b86be1347c488\"" error="reading from a closed fifo" Oct 2 19:42:50.954283 env[1750]: time="2023-10-02T19:42:50.954092289Z" level=error msg="StartContainer for \"520ec7518aebf699643e8753bc3bea4bf6ff5dbd2d32003df95b86be1347c488\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:42:50.954564 kubelet[2207]: E1002 19:42:50.954527 2207 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="520ec7518aebf699643e8753bc3bea4bf6ff5dbd2d32003df95b86be1347c488" Oct 2 19:42:50.954741 kubelet[2207]: E1002 19:42:50.954685 2207 kuberuntime_manager.go:862] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:42:50.954741 kubelet[2207]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:42:50.954741 kubelet[2207]: rm /hostbin/cilium-mount Oct 2 19:42:50.954741 kubelet[2207]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-krnhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-fsmgj_kube-system(791452ab-79ca-4aeb-b69a-946b9bd5f34f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:42:50.955352 kubelet[2207]: E1002 19:42:50.954778 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-fsmgj" podUID=791452ab-79ca-4aeb-b69a-946b9bd5f34f Oct 2 19:42:51.669886 env[1750]: time="2023-10-02T19:42:51.669813110Z" level=info msg="CreateContainer within sandbox \"9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:42:51.694353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3474113959.mount: Deactivated successfully. 
Oct 2 19:42:51.706687 env[1750]: time="2023-10-02T19:42:51.706619461Z" level=info msg="CreateContainer within sandbox \"9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef\"" Oct 2 19:42:51.708128 env[1750]: time="2023-10-02T19:42:51.708048962Z" level=info msg="StartContainer for \"26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef\"" Oct 2 19:42:51.761821 systemd[1]: Started cri-containerd-26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef.scope. Oct 2 19:42:51.812398 systemd[1]: cri-containerd-26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef.scope: Deactivated successfully. Oct 2 19:42:51.834825 env[1750]: time="2023-10-02T19:42:51.834722811Z" level=info msg="shim disconnected" id=26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef Oct 2 19:42:51.835298 env[1750]: time="2023-10-02T19:42:51.835242456Z" level=warning msg="cleaning up after shim disconnected" id=26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef namespace=k8s.io Oct 2 19:42:51.835521 env[1750]: time="2023-10-02T19:42:51.835480588Z" level=info msg="cleaning up dead shim" Oct 2 19:42:51.840776 kubelet[2207]: E1002 19:42:51.840663 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:51.851493 kubelet[2207]: E1002 19:42:51.851434 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:51.875499 env[1750]: time="2023-10-02T19:42:51.875429817Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:42:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3056 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:42:51Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:42:51.876461 env[1750]: time="2023-10-02T19:42:51.876331537Z" level=error msg="copy shim log" error="read /proc/self/fd/38: file already closed" Oct 2 19:42:51.876956 env[1750]: time="2023-10-02T19:42:51.876874774Z" level=error msg="Failed to pipe stdout of container \"26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef\"" error="reading from a closed fifo" Oct 2 19:42:51.881399 env[1750]: time="2023-10-02T19:42:51.881295843Z" level=error msg="Failed to pipe stderr of container \"26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef\"" error="reading from a closed fifo" Oct 2 19:42:51.884109 env[1750]: time="2023-10-02T19:42:51.883987765Z" level=error msg="StartContainer for \"26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:42:51.885813 kubelet[2207]: E1002 19:42:51.884726 2207 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start 
container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef" Oct 2 19:42:51.885813 kubelet[2207]: E1002 19:42:51.885558 2207 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:42:51.885813 kubelet[2207]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:42:51.885813 kubelet[2207]: rm /hostbin/cilium-mount Oct 2 19:42:51.886372 kubelet[2207]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-krnhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-fsmgj_kube-system(791452ab-79ca-4aeb-b69a-946b9bd5f34f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:42:51.886544 kubelet[2207]: E1002 19:42:51.885739 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-fsmgj" podUID=791452ab-79ca-4aeb-b69a-946b9bd5f34f Oct 2 19:42:52.402113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef-rootfs.mount: Deactivated successfully. 
Oct 2 19:42:52.668057 kubelet[2207]: I1002 19:42:52.667916 2207 scope.go:115] "RemoveContainer" containerID="520ec7518aebf699643e8753bc3bea4bf6ff5dbd2d32003df95b86be1347c488" Oct 2 19:42:52.668847 kubelet[2207]: I1002 19:42:52.668804 2207 scope.go:115] "RemoveContainer" containerID="520ec7518aebf699643e8753bc3bea4bf6ff5dbd2d32003df95b86be1347c488" Oct 2 19:42:52.679596 env[1750]: time="2023-10-02T19:42:52.679487716Z" level=info msg="RemoveContainer for \"520ec7518aebf699643e8753bc3bea4bf6ff5dbd2d32003df95b86be1347c488\"" Oct 2 19:42:52.681424 env[1750]: time="2023-10-02T19:42:52.681360612Z" level=info msg="RemoveContainer for \"520ec7518aebf699643e8753bc3bea4bf6ff5dbd2d32003df95b86be1347c488\"" Oct 2 19:42:52.681836 env[1750]: time="2023-10-02T19:42:52.681763291Z" level=error msg="RemoveContainer for \"520ec7518aebf699643e8753bc3bea4bf6ff5dbd2d32003df95b86be1347c488\" failed" error="failed to set removing state for container \"520ec7518aebf699643e8753bc3bea4bf6ff5dbd2d32003df95b86be1347c488\": container is already in removing state" Oct 2 19:42:52.682360 kubelet[2207]: E1002 19:42:52.682305 2207 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"520ec7518aebf699643e8753bc3bea4bf6ff5dbd2d32003df95b86be1347c488\": container is already in removing state" containerID="520ec7518aebf699643e8753bc3bea4bf6ff5dbd2d32003df95b86be1347c488" Oct 2 19:42:52.682545 kubelet[2207]: E1002 19:42:52.682376 2207 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "520ec7518aebf699643e8753bc3bea4bf6ff5dbd2d32003df95b86be1347c488": container is already in removing state; Skipping pod "cilium-fsmgj_kube-system(791452ab-79ca-4aeb-b69a-946b9bd5f34f)" Oct 2 19:42:52.682953 kubelet[2207]: E1002 19:42:52.682838 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-fsmgj_kube-system(791452ab-79ca-4aeb-b69a-946b9bd5f34f)\"" pod="kube-system/cilium-fsmgj" podUID=791452ab-79ca-4aeb-b69a-946b9bd5f34f Oct 2 19:42:52.686209 env[1750]: time="2023-10-02T19:42:52.686076440Z" level=info msg="RemoveContainer for \"520ec7518aebf699643e8753bc3bea4bf6ff5dbd2d32003df95b86be1347c488\" returns successfully" Oct 2 19:42:52.840951 kubelet[2207]: E1002 19:42:52.840899 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:53.207050 env[1750]: time="2023-10-02T19:42:53.206982953Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:42:53.210037 env[1750]: time="2023-10-02T19:42:53.209976800Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e0bfc5d64e2c86e8497f9da5fbf169dc17a08c923bc75187d41ff880cb71c12f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:42:53.213090 env[1750]: time="2023-10-02T19:42:53.213018875Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:42:53.214418 
env[1750]: time="2023-10-02T19:42:53.214353778Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\" returns image reference \"sha256:e0bfc5d64e2c86e8497f9da5fbf169dc17a08c923bc75187d41ff880cb71c12f\"" Oct 2 19:42:53.217631 env[1750]: time="2023-10-02T19:42:53.217567252Z" level=info msg="CreateContainer within sandbox \"21a80bd2f0c609363ff6908d0ea36596ec8add1b887e213a1ac503c3531e236c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:42:53.244360 env[1750]: time="2023-10-02T19:42:53.244287790Z" level=info msg="CreateContainer within sandbox \"21a80bd2f0c609363ff6908d0ea36596ec8add1b887e213a1ac503c3531e236c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb\"" Oct 2 19:42:53.245848 env[1750]: time="2023-10-02T19:42:53.245775059Z" level=info msg="StartContainer for \"ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb\"" Oct 2 19:42:53.306431 systemd[1]: run-containerd-runc-k8s.io-ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb-runc.mCBoQv.mount: Deactivated successfully. Oct 2 19:42:53.311675 systemd[1]: Started cri-containerd-ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb.scope. Oct 2 19:42:53.350000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.350000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.350000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.350000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.350000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.350000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.350000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.350000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.350000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.350000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.350000 audit: BPF prog-id=95 op=LOAD Oct 2 19:42:53.353000 audit[3079]: AVC avc: denied { bpf } for pid=3079 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.353000 audit[3079]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001bdb38 a2=10 a3=0 items=0 ppid=2918 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:53.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165343432633337656461373866396561306666306634366430623361 Oct 2 19:42:53.353000 audit[3079]: AVC avc: denied { perfmon } for pid=3079 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.353000 audit[3079]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001bd5a0 a2=3c a3=0 items=0 ppid=2918 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:53.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165343432633337656461373866396561306666306634366430623361 Oct 2 19:42:53.353000 audit[3079]: AVC avc: denied { bpf } for pid=3079 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.353000 audit[3079]: AVC avc: denied { bpf } for pid=3079 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.353000 audit[3079]: AVC avc: denied { bpf } for pid=3079 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.353000 audit[3079]: AVC avc: denied { perfmon } for pid=3079 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.353000 audit[3079]: AVC avc: denied { perfmon } for pid=3079 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.353000 audit[3079]: AVC avc: denied { perfmon } for pid=3079 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.353000 audit[3079]: AVC avc: denied { perfmon } for pid=3079 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.353000 audit[3079]: AVC avc: denied { perfmon } for pid=3079 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.353000 audit[3079]: AVC avc: denied { bpf } for pid=3079 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:42:53.353000 audit[3079]: AVC avc: denied { bpf } for pid=3079 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.353000 audit: BPF prog-id=96 op=LOAD Oct 2 19:42:53.353000 audit[3079]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bd8e0 a2=78 a3=0 items=0 ppid=2918 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:53.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165343432633337656461373866396561306666306634366430623361 Oct 2 19:42:53.354000 audit[3079]: AVC avc: denied { bpf } for pid=3079 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.354000 audit[3079]: AVC avc: denied { bpf } for pid=3079 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.354000 audit[3079]: AVC avc: denied { perfmon } for pid=3079 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.354000 audit[3079]: AVC avc: denied { perfmon } for pid=3079 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.354000 audit[3079]: AVC avc: denied { perfmon } for pid=3079 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.354000 audit[3079]: AVC avc: denied { perfmon } for pid=3079 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.354000 audit[3079]: AVC avc: denied { perfmon } for pid=3079 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.354000 audit[3079]: AVC avc: denied { bpf } for pid=3079 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.354000 audit[3079]: AVC avc: denied { bpf } for pid=3079 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.354000 audit: BPF prog-id=97 op=LOAD Oct 2 19:42:53.354000 audit[3079]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001bd670 a2=78 a3=0 items=0 ppid=2918 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:53.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165343432633337656461373866396561306666306634366430623361 Oct 2 19:42:53.354000 audit: BPF prog-id=97 op=UNLOAD Oct 2 
19:42:53.354000 audit: BPF prog-id=96 op=UNLOAD Oct 2 19:42:53.354000 audit[3079]: AVC avc: denied { bpf } for pid=3079 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.354000 audit[3079]: AVC avc: denied { bpf } for pid=3079 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.354000 audit[3079]: AVC avc: denied { bpf } for pid=3079 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.354000 audit[3079]: AVC avc: denied { perfmon } for pid=3079 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.354000 audit[3079]: AVC avc: denied { perfmon } for pid=3079 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.354000 audit[3079]: AVC avc: denied { perfmon } for pid=3079 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.354000 audit[3079]: AVC avc: denied { perfmon } for pid=3079 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.354000 audit[3079]: AVC avc: denied { perfmon } for pid=3079 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.354000 audit[3079]: AVC avc: denied { bpf } for pid=3079 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.354000 audit[3079]: AVC avc: denied { bpf } for pid=3079 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:53.354000 audit: BPF prog-id=98 op=LOAD Oct 2 19:42:53.354000 audit[3079]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bdb40 a2=78 a3=0 items=0 ppid=2918 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:53.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165343432633337656461373866396561306666306634366430623361 Oct 2 19:42:53.390204 env[1750]: time="2023-10-02T19:42:53.390099213Z" level=info msg="StartContainer for \"ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb\" returns successfully" Oct 2 19:42:53.470000 audit[3090]: AVC avc: denied { map_create } for pid=3090 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c819,c1021 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c819,c1021 tclass=bpf permissive=0 Oct 2 19:42:53.470000 audit[3090]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-13 a0=0 a1=40004cb768 a2=48 a3=0 items=0 ppid=2918 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" 
exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c819,c1021 key=(null) Oct 2 19:42:53.470000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:42:53.675640 kubelet[2207]: E1002 19:42:53.675591 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-fsmgj_kube-system(791452ab-79ca-4aeb-b69a-946b9bd5f34f)\"" pod="kube-system/cilium-fsmgj" podUID=791452ab-79ca-4aeb-b69a-946b9bd5f34f Oct 2 19:42:53.842123 kubelet[2207]: E1002 19:42:53.841803 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:54.027374 kubelet[2207]: W1002 19:42:54.027302 2207 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod791452ab_79ca_4aeb_b69a_946b9bd5f34f.slice/cri-containerd-520ec7518aebf699643e8753bc3bea4bf6ff5dbd2d32003df95b86be1347c488.scope WatchSource:0}: container "520ec7518aebf699643e8753bc3bea4bf6ff5dbd2d32003df95b86be1347c488" in namespace "k8s.io": not found Oct 2 19:42:54.842054 kubelet[2207]: E1002 19:42:54.842009 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:55.842619 kubelet[2207]: E1002 19:42:55.842572 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:56.608990 kubelet[2207]: E1002 19:42:56.608912 2207 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:56.844424 kubelet[2207]: E1002 19:42:56.844350 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:56.853555 kubelet[2207]: E1002 19:42:56.853513 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:57.137421 kubelet[2207]: W1002 19:42:57.137371 2207 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod791452ab_79ca_4aeb_b69a_946b9bd5f34f.slice/cri-containerd-26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef.scope WatchSource:0}: task 26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef not found: not found Oct 2 19:42:57.845461 kubelet[2207]: E1002 19:42:57.845398 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:58.846887 kubelet[2207]: E1002 19:42:58.846841 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:59.848108 kubelet[2207]: E1002 19:42:59.848065 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:00.849584 kubelet[2207]: E1002 19:43:00.849536 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:01.850719 kubelet[2207]: E1002 19:43:01.850659 2207 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:01.855793 kubelet[2207]: E1002 19:43:01.855747 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:02.851851 kubelet[2207]: E1002 19:43:02.851774 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:03.852787 kubelet[2207]: E1002 19:43:03.852727 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:04.853760 kubelet[2207]: E1002 19:43:04.853686 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:05.854665 kubelet[2207]: E1002 19:43:05.854621 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:06.855811 kubelet[2207]: E1002 19:43:06.855735 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:06.856664 kubelet[2207]: E1002 19:43:06.856627 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:07.110389 env[1750]: time="2023-10-02T19:43:07.110038990Z" level=info msg="CreateContainer within sandbox \"9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:43:07.130493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4052448318.mount: Deactivated successfully. Oct 2 19:43:07.140475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount151985404.mount: Deactivated successfully. Oct 2 19:43:07.141826 env[1750]: time="2023-10-02T19:43:07.141699252Z" level=info msg="CreateContainer within sandbox \"9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049\"" Oct 2 19:43:07.142984 env[1750]: time="2023-10-02T19:43:07.142912278Z" level=info msg="StartContainer for \"850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049\"" Oct 2 19:43:07.192180 systemd[1]: Started cri-containerd-850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049.scope. Oct 2 19:43:07.231497 systemd[1]: cri-containerd-850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049.scope: Deactivated successfully. 
Oct 2 19:43:07.464838 env[1750]: time="2023-10-02T19:43:07.464741705Z" level=info msg="shim disconnected" id=850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049 Oct 2 19:43:07.465199 env[1750]: time="2023-10-02T19:43:07.465138263Z" level=warning msg="cleaning up after shim disconnected" id=850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049 namespace=k8s.io Oct 2 19:43:07.465339 env[1750]: time="2023-10-02T19:43:07.465310729Z" level=info msg="cleaning up dead shim" Oct 2 19:43:07.492385 env[1750]: time="2023-10-02T19:43:07.492311312Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3134 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:43:07Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:43:07.492866 env[1750]: time="2023-10-02T19:43:07.492776774Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:43:07.494370 env[1750]: time="2023-10-02T19:43:07.494313001Z" level=error msg="Failed to pipe stderr of container \"850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049\"" error="reading from a closed fifo" Oct 2 19:43:07.497285 env[1750]: time="2023-10-02T19:43:07.497212986Z" level=error msg="Failed to pipe stdout of container \"850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049\"" error="reading from a closed fifo" Oct 2 19:43:07.499837 env[1750]: time="2023-10-02T19:43:07.499727827Z" level=error msg="StartContainer for \"850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:43:07.500299 kubelet[2207]: E1002 19:43:07.500240 2207 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049" Oct 2 19:43:07.500506 kubelet[2207]: E1002 19:43:07.500474 2207 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:43:07.500506 kubelet[2207]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:43:07.500506 kubelet[2207]: rm /hostbin/cilium-mount Oct 2 19:43:07.500506 kubelet[2207]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-krnhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-fsmgj_kube-system(791452ab-79ca-4aeb-b69a-946b9bd5f34f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:43:07.500827 kubelet[2207]: E1002 19:43:07.500574 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-fsmgj" podUID=791452ab-79ca-4aeb-b69a-946b9bd5f34f Oct 2 19:43:07.722897 kubelet[2207]: I1002 19:43:07.722753 2207 scope.go:115] "RemoveContainer" containerID="26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef" Oct 2 19:43:07.724877 kubelet[2207]: I1002 19:43:07.724820 2207 scope.go:115] "RemoveContainer" containerID="26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef" Oct 2 19:43:07.728085 env[1750]: time="2023-10-02T19:43:07.727822062Z" level=info msg="RemoveContainer for \"26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef\"" Oct 2 19:43:07.730046 env[1750]: time="2023-10-02T19:43:07.729301623Z" level=info msg="RemoveContainer for \"26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef\"" Oct 2 19:43:07.730046 env[1750]: time="2023-10-02T19:43:07.729567559Z" level=error msg="RemoveContainer for \"26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef\" failed" error="failed to set removing state for container \"26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef\": container is already in removing state" Oct 2 19:43:07.731000 kubelet[2207]: E1002 19:43:07.730332 2207 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef\": container is already in removing state" 
containerID="26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef" Oct 2 19:43:07.731000 kubelet[2207]: E1002 19:43:07.730396 2207 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef": container is already in removing state; Skipping pod "cilium-fsmgj_kube-system(791452ab-79ca-4aeb-b69a-946b9bd5f34f)" Oct 2 19:43:07.731558 kubelet[2207]: E1002 19:43:07.731513 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-fsmgj_kube-system(791452ab-79ca-4aeb-b69a-946b9bd5f34f)\"" pod="kube-system/cilium-fsmgj" podUID=791452ab-79ca-4aeb-b69a-946b9bd5f34f Oct 2 19:43:07.735188 env[1750]: time="2023-10-02T19:43:07.735083007Z" level=info msg="RemoveContainer for \"26c92c0a133de7af4e3ee10edfc6c84d10fccdc0743fab210049358fe49700ef\" returns successfully" Oct 2 19:43:07.856565 kubelet[2207]: E1002 19:43:07.856498 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:08.124669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049-rootfs.mount: Deactivated successfully. Oct 2 19:43:08.857688 kubelet[2207]: E1002 19:43:08.857645 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:09.859235 kubelet[2207]: E1002 19:43:09.859177 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:10.570721 kubelet[2207]: W1002 19:43:10.570670 2207 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod791452ab_79ca_4aeb_b69a_946b9bd5f34f.slice/cri-containerd-850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049.scope WatchSource:0}: task 850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049 not found: not found Oct 2 19:43:10.859784 kubelet[2207]: E1002 19:43:10.859647 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:11.857632 kubelet[2207]: E1002 19:43:11.857568 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:11.860726 kubelet[2207]: E1002 19:43:11.860694 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:12.861824 kubelet[2207]: E1002 19:43:12.861775 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:13.863294 kubelet[2207]: E1002 19:43:13.863222 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:14.864037 kubelet[2207]: E1002 19:43:14.863990 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:15.865872 kubelet[2207]: E1002 19:43:15.865800 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:43:16.609362 kubelet[2207]: E1002 19:43:16.609319 2207 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:16.677356 env[1750]: time="2023-10-02T19:43:16.677282105Z" level=info msg="StopPodSandbox for \"e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2\"" Oct 2 19:43:16.677900 env[1750]: time="2023-10-02T19:43:16.677474756Z" level=info msg="TearDown network for sandbox \"e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2\" successfully" Oct 2 19:43:16.677900 env[1750]: time="2023-10-02T19:43:16.677558397Z" level=info msg="StopPodSandbox for \"e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2\" returns successfully" Oct 2 19:43:16.678502 env[1750]: time="2023-10-02T19:43:16.678419024Z" level=info msg="RemovePodSandbox for \"e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2\"" Oct 2 19:43:16.678612 env[1750]: time="2023-10-02T19:43:16.678505833Z" level=info msg="Forcibly stopping sandbox \"e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2\"" Oct 2 19:43:16.678736 env[1750]: time="2023-10-02T19:43:16.678676631Z" level=info msg="TearDown network for sandbox \"e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2\" successfully" Oct 2 19:43:16.683636 env[1750]: time="2023-10-02T19:43:16.683562760Z" level=info msg="RemovePodSandbox \"e6419a9f35436e0380339825543be3ebe508e1120720c8046d8446842837c4c2\" returns successfully" Oct 2 19:43:16.858859 kubelet[2207]: E1002 19:43:16.858792 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:16.866874 kubelet[2207]: E1002 19:43:16.866191 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:17.867297 kubelet[2207]: E1002 19:43:17.867240 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:18.868985 kubelet[2207]: E1002 19:43:18.868941 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:19.870187 kubelet[2207]: E1002 19:43:19.870109 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:20.872035 kubelet[2207]: E1002 19:43:20.871928 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:21.860494 kubelet[2207]: E1002 19:43:21.860436 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:21.872307 kubelet[2207]: E1002 19:43:21.872240 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:22.105958 kubelet[2207]: E1002 19:43:22.105919 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-fsmgj_kube-system(791452ab-79ca-4aeb-b69a-946b9bd5f34f)\"" pod="kube-system/cilium-fsmgj" podUID=791452ab-79ca-4aeb-b69a-946b9bd5f34f Oct 2 19:43:22.873257 kubelet[2207]: E1002 
19:43:22.873212 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:23.874726 kubelet[2207]: E1002 19:43:23.874674 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:24.875806 kubelet[2207]: E1002 19:43:24.875729 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:25.876501 kubelet[2207]: E1002 19:43:25.876453 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:26.861876 kubelet[2207]: E1002 19:43:26.861781 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:26.877973 kubelet[2207]: E1002 19:43:26.877944 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:27.879124 kubelet[2207]: E1002 19:43:27.879056 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:28.880353 kubelet[2207]: E1002 19:43:28.880280 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:29.881522 kubelet[2207]: E1002 19:43:29.881475 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:30.882490 kubelet[2207]: E1002 19:43:30.882443 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:31.863680 kubelet[2207]: E1002 19:43:31.863619 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:31.884327 kubelet[2207]: E1002 19:43:31.884254 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:32.884429 kubelet[2207]: E1002 19:43:32.884385 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:33.886203 kubelet[2207]: E1002 19:43:33.886157 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:34.108116 env[1750]: time="2023-10-02T19:43:34.108038705Z" level=info msg="CreateContainer within sandbox \"9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:43:34.125653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount740341741.mount: Deactivated successfully. 
Oct 2 19:43:34.138594 env[1750]: time="2023-10-02T19:43:34.138436447Z" level=info msg="CreateContainer within sandbox \"9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"590a8508d500471264f5cfa303a33ea8540118fc8ca500e5291843b27313bb70\"" Oct 2 19:43:34.139497 env[1750]: time="2023-10-02T19:43:34.139451010Z" level=info msg="StartContainer for \"590a8508d500471264f5cfa303a33ea8540118fc8ca500e5291843b27313bb70\"" Oct 2 19:43:34.192106 systemd[1]: Started cri-containerd-590a8508d500471264f5cfa303a33ea8540118fc8ca500e5291843b27313bb70.scope. Oct 2 19:43:34.231178 systemd[1]: cri-containerd-590a8508d500471264f5cfa303a33ea8540118fc8ca500e5291843b27313bb70.scope: Deactivated successfully. Oct 2 19:43:34.251845 env[1750]: time="2023-10-02T19:43:34.251746588Z" level=info msg="shim disconnected" id=590a8508d500471264f5cfa303a33ea8540118fc8ca500e5291843b27313bb70 Oct 2 19:43:34.252241 env[1750]: time="2023-10-02T19:43:34.251853197Z" level=warning msg="cleaning up after shim disconnected" id=590a8508d500471264f5cfa303a33ea8540118fc8ca500e5291843b27313bb70 namespace=k8s.io Oct 2 19:43:34.252241 env[1750]: time="2023-10-02T19:43:34.251876861Z" level=info msg="cleaning up dead shim" Oct 2 19:43:34.279748 env[1750]: time="2023-10-02T19:43:34.279648817Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3176 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:43:34Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/590a8508d500471264f5cfa303a33ea8540118fc8ca500e5291843b27313bb70/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:43:34.280243 env[1750]: time="2023-10-02T19:43:34.280115287Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:43:34.280652 env[1750]: time="2023-10-02T19:43:34.280577880Z" level=error msg="Failed to pipe stdout of container \"590a8508d500471264f5cfa303a33ea8540118fc8ca500e5291843b27313bb70\"" error="reading from a closed fifo" Oct 2 19:43:34.280774 env[1750]: time="2023-10-02T19:43:34.280724761Z" level=error msg="Failed to pipe stderr of container \"590a8508d500471264f5cfa303a33ea8540118fc8ca500e5291843b27313bb70\"" error="reading from a closed fifo" Oct 2 19:43:34.283189 env[1750]: time="2023-10-02T19:43:34.283066287Z" level=error msg="StartContainer for \"590a8508d500471264f5cfa303a33ea8540118fc8ca500e5291843b27313bb70\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:43:34.283437 kubelet[2207]: E1002 19:43:34.283396 2207 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="590a8508d500471264f5cfa303a33ea8540118fc8ca500e5291843b27313bb70" Oct 2 19:43:34.283621 kubelet[2207]: E1002 19:43:34.283540 2207 kuberuntime_manager.go:862] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:43:34.283621 kubelet[2207]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:43:34.283621 kubelet[2207]: rm /hostbin/cilium-mount Oct 2 19:43:34.283621 kubelet[2207]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-krnhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-fsmgj_kube-system(791452ab-79ca-4aeb-b69a-946b9bd5f34f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:43:34.283925 kubelet[2207]: E1002 19:43:34.283611 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-fsmgj" podUID=791452ab-79ca-4aeb-b69a-946b9bd5f34f Oct 2 19:43:34.795905 kubelet[2207]: I1002 19:43:34.795862 2207 scope.go:115] "RemoveContainer" containerID="850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049" Oct 2 19:43:34.796794 kubelet[2207]: I1002 19:43:34.796746 2207 scope.go:115] "RemoveContainer" containerID="850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049" Oct 2 19:43:34.800365 env[1750]: time="2023-10-02T19:43:34.800289474Z" level=info msg="RemoveContainer for \"850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049\"" Oct 2 19:43:34.801282 env[1750]: time="2023-10-02T19:43:34.801203596Z" level=info msg="RemoveContainer for \"850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049\"" Oct 2 19:43:34.801478 env[1750]: time="2023-10-02T19:43:34.801378738Z" level=error msg="RemoveContainer for \"850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049\" failed" error="failed to set removing state for container 
\"850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049\": container is already in removing state" Oct 2 19:43:34.801873 kubelet[2207]: E1002 19:43:34.801815 2207 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049\": container is already in removing state" containerID="850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049" Oct 2 19:43:34.802106 kubelet[2207]: E1002 19:43:34.801888 2207 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049": container is already in removing state; Skipping pod "cilium-fsmgj_kube-system(791452ab-79ca-4aeb-b69a-946b9bd5f34f)" Oct 2 19:43:34.802557 kubelet[2207]: E1002 19:43:34.802470 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-fsmgj_kube-system(791452ab-79ca-4aeb-b69a-946b9bd5f34f)\"" pod="kube-system/cilium-fsmgj" podUID=791452ab-79ca-4aeb-b69a-946b9bd5f34f Oct 2 19:43:34.807342 env[1750]: time="2023-10-02T19:43:34.807240335Z" level=info msg="RemoveContainer for \"850883d6c16371988983cabeb2ff317f60c5e83229cbf8bf44af906e86d35049\" returns successfully" Oct 2 19:43:34.887995 kubelet[2207]: E1002 19:43:34.887907 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:35.121477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-590a8508d500471264f5cfa303a33ea8540118fc8ca500e5291843b27313bb70-rootfs.mount: Deactivated successfully. 
Oct 2 19:43:35.889091 kubelet[2207]: E1002 19:43:35.889042 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:36.608820 kubelet[2207]: E1002 19:43:36.608776 2207 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:36.865593 kubelet[2207]: E1002 19:43:36.865186 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:36.890437 kubelet[2207]: E1002 19:43:36.890387 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:37.357176 kubelet[2207]: W1002 19:43:37.357091 2207 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod791452ab_79ca_4aeb_b69a_946b9bd5f34f.slice/cri-containerd-590a8508d500471264f5cfa303a33ea8540118fc8ca500e5291843b27313bb70.scope WatchSource:0}: task 590a8508d500471264f5cfa303a33ea8540118fc8ca500e5291843b27313bb70 not found: not found Oct 2 19:43:37.892190 kubelet[2207]: E1002 19:43:37.892106 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:38.893571 kubelet[2207]: E1002 19:43:38.893509 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:39.894331 kubelet[2207]: E1002 19:43:39.894289 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:40.895632 kubelet[2207]: E1002 19:43:40.895579 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:41.866993 kubelet[2207]: E1002 19:43:41.866948 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:41.897422 kubelet[2207]: E1002 19:43:41.897357 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:42.897695 kubelet[2207]: E1002 19:43:42.897641 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:43.898045 kubelet[2207]: E1002 19:43:43.898005 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:44.899800 kubelet[2207]: E1002 19:43:44.899729 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:45.900022 kubelet[2207]: E1002 19:43:45.899943 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:46.105123 kubelet[2207]: E1002 19:43:46.105078 2207 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-fsmgj_kube-system(791452ab-79ca-4aeb-b69a-946b9bd5f34f)\"" pod="kube-system/cilium-fsmgj" podUID=791452ab-79ca-4aeb-b69a-946b9bd5f34f Oct 2 19:43:46.868664 kubelet[2207]: E1002 19:43:46.868627 2207 kubelet.go:2373] 
"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:46.901063 kubelet[2207]: E1002 19:43:46.901014 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:47.902098 kubelet[2207]: E1002 19:43:47.902030 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:48.902524 kubelet[2207]: E1002 19:43:48.902480 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:49.903277 kubelet[2207]: E1002 19:43:49.903221 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:50.904591 kubelet[2207]: E1002 19:43:50.904488 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:51.870625 kubelet[2207]: E1002 19:43:51.870563 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:51.904740 kubelet[2207]: E1002 19:43:51.904667 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:51.935228 env[1750]: time="2023-10-02T19:43:51.935128509Z" level=info msg="StopPodSandbox for \"9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91\"" Oct 2 19:43:51.935867 env[1750]: time="2023-10-02T19:43:51.935246746Z" level=info msg="Container to stop \"590a8508d500471264f5cfa303a33ea8540118fc8ca500e5291843b27313bb70\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:43:51.938065 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91-shm.mount: Deactivated successfully. Oct 2 19:43:51.958000 audit: BPF prog-id=91 op=UNLOAD Oct 2 19:43:51.959322 systemd[1]: cri-containerd-9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91.scope: Deactivated successfully. Oct 2 19:43:51.961855 kernel: kauditd_printk_skb: 162 callbacks suppressed Oct 2 19:43:51.961982 kernel: audit: type=1334 audit(1696275831.958:788): prog-id=91 op=UNLOAD Oct 2 19:43:51.965000 audit: BPF prog-id=94 op=UNLOAD Oct 2 19:43:51.970411 kernel: audit: type=1334 audit(1696275831.965:789): prog-id=94 op=UNLOAD Oct 2 19:43:51.995323 env[1750]: time="2023-10-02T19:43:51.995242858Z" level=info msg="StopContainer for \"ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb\" with timeout 30 (s)" Oct 2 19:43:51.995884 env[1750]: time="2023-10-02T19:43:51.995827839Z" level=info msg="Stop container \"ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb\" with signal terminated" Oct 2 19:43:52.023377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91-rootfs.mount: Deactivated successfully. Oct 2 19:43:52.039607 systemd[1]: cri-containerd-ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb.scope: Deactivated successfully. 
Oct 2 19:43:52.043595 kernel: audit: type=1334 audit(1696275832.039:790): prog-id=95 op=UNLOAD Oct 2 19:43:52.039000 audit: BPF prog-id=95 op=UNLOAD Oct 2 19:43:52.043000 audit: BPF prog-id=98 op=UNLOAD Oct 2 19:43:52.047343 kernel: audit: type=1334 audit(1696275832.043:791): prog-id=98 op=UNLOAD Oct 2 19:43:52.048791 env[1750]: time="2023-10-02T19:43:52.048703980Z" level=info msg="shim disconnected" id=9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91 Oct 2 19:43:52.048791 env[1750]: time="2023-10-02T19:43:52.048786900Z" level=warning msg="cleaning up after shim disconnected" id=9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91 namespace=k8s.io Oct 2 19:43:52.049063 env[1750]: time="2023-10-02T19:43:52.048811933Z" level=info msg="cleaning up dead shim" Oct 2 19:43:52.080780 env[1750]: time="2023-10-02T19:43:52.080713157Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3217 runtime=io.containerd.runc.v2\n" Oct 2 19:43:52.081772 env[1750]: time="2023-10-02T19:43:52.081703202Z" level=info msg="TearDown network for sandbox \"9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91\" successfully" Oct 2 19:43:52.082057 env[1750]: time="2023-10-02T19:43:52.081989741Z" level=info msg="StopPodSandbox for \"9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91\" returns successfully" Oct 2 19:43:52.113838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb-rootfs.mount: Deactivated successfully. Oct 2 19:43:52.128389 env[1750]: time="2023-10-02T19:43:52.127131915Z" level=info msg="shim disconnected" id=ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb Oct 2 19:43:52.128965 env[1750]: time="2023-10-02T19:43:52.128908292Z" level=warning msg="cleaning up after shim disconnected" id=ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb namespace=k8s.io Oct 2 19:43:52.129243 env[1750]: time="2023-10-02T19:43:52.129197914Z" level=info msg="cleaning up dead shim" Oct 2 19:43:52.156643 env[1750]: time="2023-10-02T19:43:52.156578755Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3244 runtime=io.containerd.runc.v2\n" Oct 2 19:43:52.163265 env[1750]: time="2023-10-02T19:43:52.163160386Z" level=info msg="StopContainer for \"ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb\" returns successfully" Oct 2 19:43:52.164524 env[1750]: time="2023-10-02T19:43:52.164420974Z" level=info msg="StopPodSandbox for \"21a80bd2f0c609363ff6908d0ea36596ec8add1b887e213a1ac503c3531e236c\"" Oct 2 19:43:52.164713 env[1750]: time="2023-10-02T19:43:52.164600028Z" level=info msg="Container to stop \"ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:43:52.167391 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-21a80bd2f0c609363ff6908d0ea36596ec8add1b887e213a1ac503c3531e236c-shm.mount: Deactivated successfully. Oct 2 19:43:52.190278 systemd[1]: cri-containerd-21a80bd2f0c609363ff6908d0ea36596ec8add1b887e213a1ac503c3531e236c.scope: Deactivated successfully. 
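The sandbox teardown above follows containerd's usual sequence: systemd deactivates the cri-containerd-<id>.scope unit, the shim logs "shim disconnected" and "cleaning up dead shim", and the CRI plugin then reports "TearDown network" and "StopPodSandbox ... returns successfully". When tracing a sequence like this it can help to pull the structured fields out of the env[...] lines; the helper below is hypothetical (not part of containerd or Flatcar) and simply parses the time=/level=/msg=/id= pairs in the format shown in this log.

# Hypothetical helper for containerd entries of the form:
#   env[1750]: time="..." level=info msg="shim disconnected" id=9d24d96be024...
import re
from typing import Dict

FIELD_RE = re.compile(r'(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))')

def parse_containerd_fields(line: str) -> Dict[str, str]:
    """Extract key=value fields (time, level, msg, id, error, ...) from one log line."""
    fields: Dict[str, str] = {}
    for key, quoted, bare in FIELD_RE.findall(line):
        fields[key] = quoted if quoted else bare
    return fields

if __name__ == "__main__":
    sample = ('env[1750]: time="2023-10-02T19:43:52.048703980Z" level=info '
              'msg="shim disconnected" id=9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91')
    print(parse_containerd_fields(sample))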
Oct 2 19:43:52.189000 audit: BPF prog-id=87 op=UNLOAD Oct 2 19:43:52.194338 kernel: audit: type=1334 audit(1696275832.189:792): prog-id=87 op=UNLOAD Oct 2 19:43:52.195000 audit: BPF prog-id=90 op=UNLOAD Oct 2 19:43:52.201205 kernel: audit: type=1334 audit(1696275832.195:793): prog-id=90 op=UNLOAD Oct 2 19:43:52.201369 kubelet[2207]: I1002 19:43:52.200356 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-cilium-cgroup\") pod \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " Oct 2 19:43:52.201369 kubelet[2207]: I1002 19:43:52.200417 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-lib-modules\") pod \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " Oct 2 19:43:52.201369 kubelet[2207]: I1002 19:43:52.200469 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/791452ab-79ca-4aeb-b69a-946b9bd5f34f-cilium-config-path\") pod \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " Oct 2 19:43:52.201369 kubelet[2207]: I1002 19:43:52.200509 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-bpf-maps\") pod \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " Oct 2 19:43:52.201369 kubelet[2207]: I1002 19:43:52.200549 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-host-proc-sys-net\") pod \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " Oct 2 19:43:52.201369 kubelet[2207]: I1002 19:43:52.200597 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/791452ab-79ca-4aeb-b69a-946b9bd5f34f-clustermesh-secrets\") pod \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " Oct 2 19:43:52.201828 kubelet[2207]: I1002 19:43:52.200638 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-hostproc\") pod \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " Oct 2 19:43:52.201828 kubelet[2207]: I1002 19:43:52.200679 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/791452ab-79ca-4aeb-b69a-946b9bd5f34f-hubble-tls\") pod \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " Oct 2 19:43:52.201828 kubelet[2207]: I1002 19:43:52.200752 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-host-proc-sys-kernel\") pod \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " Oct 2 19:43:52.201828 kubelet[2207]: I1002 19:43:52.200791 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-cni-path\") pod \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " Oct 2 19:43:52.201828 kubelet[2207]: I1002 19:43:52.200829 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-etc-cni-netd\") pod \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " Oct 2 19:43:52.201828 kubelet[2207]: I1002 19:43:52.200869 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-xtables-lock\") pod \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " Oct 2 19:43:52.202290 kubelet[2207]: I1002 19:43:52.200914 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/791452ab-79ca-4aeb-b69a-946b9bd5f34f-cilium-ipsec-secrets\") pod \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " Oct 2 19:43:52.202290 kubelet[2207]: I1002 19:43:52.200952 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-cilium-run\") pod \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " Oct 2 19:43:52.202290 kubelet[2207]: I1002 19:43:52.200994 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krnhw\" (UniqueName: \"kubernetes.io/projected/791452ab-79ca-4aeb-b69a-946b9bd5f34f-kube-api-access-krnhw\") pod \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\" (UID: \"791452ab-79ca-4aeb-b69a-946b9bd5f34f\") " Oct 2 19:43:52.202290 kubelet[2207]: I1002 19:43:52.201342 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "791452ab-79ca-4aeb-b69a-946b9bd5f34f" (UID: "791452ab-79ca-4aeb-b69a-946b9bd5f34f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:52.202290 kubelet[2207]: I1002 19:43:52.201427 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "791452ab-79ca-4aeb-b69a-946b9bd5f34f" (UID: "791452ab-79ca-4aeb-b69a-946b9bd5f34f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:52.202290 kubelet[2207]: W1002 19:43:52.201717 2207 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/791452ab-79ca-4aeb-b69a-946b9bd5f34f/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:43:52.204679 kubelet[2207]: I1002 19:43:52.204578 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "791452ab-79ca-4aeb-b69a-946b9bd5f34f" (UID: "791452ab-79ca-4aeb-b69a-946b9bd5f34f"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:52.204872 kubelet[2207]: I1002 19:43:52.204697 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "791452ab-79ca-4aeb-b69a-946b9bd5f34f" (UID: "791452ab-79ca-4aeb-b69a-946b9bd5f34f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:52.206447 kubelet[2207]: I1002 19:43:52.206361 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-cni-path" (OuterVolumeSpecName: "cni-path") pod "791452ab-79ca-4aeb-b69a-946b9bd5f34f" (UID: "791452ab-79ca-4aeb-b69a-946b9bd5f34f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:52.206645 kubelet[2207]: I1002 19:43:52.206469 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-hostproc" (OuterVolumeSpecName: "hostproc") pod "791452ab-79ca-4aeb-b69a-946b9bd5f34f" (UID: "791452ab-79ca-4aeb-b69a-946b9bd5f34f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:52.207024 kubelet[2207]: I1002 19:43:52.206955 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "791452ab-79ca-4aeb-b69a-946b9bd5f34f" (UID: "791452ab-79ca-4aeb-b69a-946b9bd5f34f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:52.207243 kubelet[2207]: I1002 19:43:52.207050 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "791452ab-79ca-4aeb-b69a-946b9bd5f34f" (UID: "791452ab-79ca-4aeb-b69a-946b9bd5f34f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:52.207243 kubelet[2207]: I1002 19:43:52.207097 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "791452ab-79ca-4aeb-b69a-946b9bd5f34f" (UID: "791452ab-79ca-4aeb-b69a-946b9bd5f34f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:52.209998 kubelet[2207]: I1002 19:43:52.209906 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/791452ab-79ca-4aeb-b69a-946b9bd5f34f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "791452ab-79ca-4aeb-b69a-946b9bd5f34f" (UID: "791452ab-79ca-4aeb-b69a-946b9bd5f34f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:43:52.210231 kubelet[2207]: I1002 19:43:52.210046 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "791452ab-79ca-4aeb-b69a-946b9bd5f34f" (UID: "791452ab-79ca-4aeb-b69a-946b9bd5f34f"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:52.233486 systemd[1]: var-lib-kubelet-pods-791452ab\x2d79ca\x2d4aeb\x2db69a\x2d946b9bd5f34f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkrnhw.mount: Deactivated successfully. Oct 2 19:43:52.239568 kubelet[2207]: I1002 19:43:52.239501 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/791452ab-79ca-4aeb-b69a-946b9bd5f34f-kube-api-access-krnhw" (OuterVolumeSpecName: "kube-api-access-krnhw") pod "791452ab-79ca-4aeb-b69a-946b9bd5f34f" (UID: "791452ab-79ca-4aeb-b69a-946b9bd5f34f"). InnerVolumeSpecName "kube-api-access-krnhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:43:52.247461 kubelet[2207]: I1002 19:43:52.247367 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/791452ab-79ca-4aeb-b69a-946b9bd5f34f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "791452ab-79ca-4aeb-b69a-946b9bd5f34f" (UID: "791452ab-79ca-4aeb-b69a-946b9bd5f34f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:43:52.247658 kubelet[2207]: I1002 19:43:52.247588 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/791452ab-79ca-4aeb-b69a-946b9bd5f34f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "791452ab-79ca-4aeb-b69a-946b9bd5f34f" (UID: "791452ab-79ca-4aeb-b69a-946b9bd5f34f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:43:52.249370 kubelet[2207]: I1002 19:43:52.249288 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/791452ab-79ca-4aeb-b69a-946b9bd5f34f-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "791452ab-79ca-4aeb-b69a-946b9bd5f34f" (UID: "791452ab-79ca-4aeb-b69a-946b9bd5f34f"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:43:52.285721 env[1750]: time="2023-10-02T19:43:52.285637189Z" level=info msg="shim disconnected" id=21a80bd2f0c609363ff6908d0ea36596ec8add1b887e213a1ac503c3531e236c Oct 2 19:43:52.285996 env[1750]: time="2023-10-02T19:43:52.285724117Z" level=warning msg="cleaning up after shim disconnected" id=21a80bd2f0c609363ff6908d0ea36596ec8add1b887e213a1ac503c3531e236c namespace=k8s.io Oct 2 19:43:52.285996 env[1750]: time="2023-10-02T19:43:52.285749306Z" level=info msg="cleaning up dead shim" Oct 2 19:43:52.301826 kubelet[2207]: I1002 19:43:52.301744 2207 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-hostproc\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:43:52.301826 kubelet[2207]: I1002 19:43:52.301819 2207 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/791452ab-79ca-4aeb-b69a-946b9bd5f34f-hubble-tls\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:43:52.302130 kubelet[2207]: I1002 19:43:52.301850 2207 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-host-proc-sys-kernel\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:43:52.302130 kubelet[2207]: I1002 19:43:52.301874 2207 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-cni-path\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:43:52.302130 kubelet[2207]: I1002 19:43:52.301902 2207 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-etc-cni-netd\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:43:52.302130 kubelet[2207]: I1002 19:43:52.301925 2207 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-xtables-lock\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:43:52.302130 kubelet[2207]: I1002 19:43:52.301952 2207 reconciler.go:399] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/791452ab-79ca-4aeb-b69a-946b9bd5f34f-cilium-ipsec-secrets\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:43:52.302130 kubelet[2207]: I1002 19:43:52.301978 2207 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-cilium-run\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:43:52.302130 kubelet[2207]: I1002 19:43:52.302001 2207 reconciler.go:399] "Volume detached for volume \"kube-api-access-krnhw\" (UniqueName: \"kubernetes.io/projected/791452ab-79ca-4aeb-b69a-946b9bd5f34f-kube-api-access-krnhw\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:43:52.302130 kubelet[2207]: I1002 19:43:52.302024 2207 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-lib-modules\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:43:52.302697 kubelet[2207]: I1002 19:43:52.302051 2207 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/791452ab-79ca-4aeb-b69a-946b9bd5f34f-cilium-config-path\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:43:52.302697 kubelet[2207]: I1002 19:43:52.302074 2207 
reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-bpf-maps\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:43:52.302697 kubelet[2207]: I1002 19:43:52.302097 2207 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-cilium-cgroup\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:43:52.302697 kubelet[2207]: I1002 19:43:52.302121 2207 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/791452ab-79ca-4aeb-b69a-946b9bd5f34f-host-proc-sys-net\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:43:52.302697 kubelet[2207]: I1002 19:43:52.302182 2207 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/791452ab-79ca-4aeb-b69a-946b9bd5f34f-clustermesh-secrets\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:43:52.316865 env[1750]: time="2023-10-02T19:43:52.316787985Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3279 runtime=io.containerd.runc.v2\n" Oct 2 19:43:52.317623 env[1750]: time="2023-10-02T19:43:52.317543825Z" level=info msg="TearDown network for sandbox \"21a80bd2f0c609363ff6908d0ea36596ec8add1b887e213a1ac503c3531e236c\" successfully" Oct 2 19:43:52.317623 env[1750]: time="2023-10-02T19:43:52.317618885Z" level=info msg="StopPodSandbox for \"21a80bd2f0c609363ff6908d0ea36596ec8add1b887e213a1ac503c3531e236c\" returns successfully" Oct 2 19:43:52.405741 kubelet[2207]: I1002 19:43:52.402719 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/322f9277-7f70-4d2f-9338-46c47013693a-cilium-config-path\") pod \"322f9277-7f70-4d2f-9338-46c47013693a\" (UID: \"322f9277-7f70-4d2f-9338-46c47013693a\") " Oct 2 19:43:52.405741 kubelet[2207]: I1002 19:43:52.402809 2207 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d78qz\" (UniqueName: \"kubernetes.io/projected/322f9277-7f70-4d2f-9338-46c47013693a-kube-api-access-d78qz\") pod \"322f9277-7f70-4d2f-9338-46c47013693a\" (UID: \"322f9277-7f70-4d2f-9338-46c47013693a\") " Oct 2 19:43:52.405741 kubelet[2207]: W1002 19:43:52.403750 2207 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/322f9277-7f70-4d2f-9338-46c47013693a/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:43:52.413001 kubelet[2207]: I1002 19:43:52.412886 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/322f9277-7f70-4d2f-9338-46c47013693a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "322f9277-7f70-4d2f-9338-46c47013693a" (UID: "322f9277-7f70-4d2f-9338-46c47013693a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:43:52.415036 kubelet[2207]: I1002 19:43:52.414958 2207 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/322f9277-7f70-4d2f-9338-46c47013693a-kube-api-access-d78qz" (OuterVolumeSpecName: "kube-api-access-d78qz") pod "322f9277-7f70-4d2f-9338-46c47013693a" (UID: "322f9277-7f70-4d2f-9338-46c47013693a"). InnerVolumeSpecName "kube-api-access-d78qz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:43:52.503424 kubelet[2207]: I1002 19:43:52.503362 2207 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/322f9277-7f70-4d2f-9338-46c47013693a-cilium-config-path\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:43:52.503634 kubelet[2207]: I1002 19:43:52.503447 2207 reconciler.go:399] "Volume detached for volume \"kube-api-access-d78qz\" (UniqueName: \"kubernetes.io/projected/322f9277-7f70-4d2f-9338-46c47013693a-kube-api-access-d78qz\") on node \"172.31.27.230\" DevicePath \"\"" Oct 2 19:43:52.844064 kubelet[2207]: I1002 19:43:52.844010 2207 scope.go:115] "RemoveContainer" containerID="ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb" Oct 2 19:43:52.853880 systemd[1]: Removed slice kubepods-besteffort-pod322f9277_7f70_4d2f_9338_46c47013693a.slice. Oct 2 19:43:52.857877 env[1750]: time="2023-10-02T19:43:52.856905283Z" level=info msg="RemoveContainer for \"ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb\"" Oct 2 19:43:52.862403 env[1750]: time="2023-10-02T19:43:52.862233933Z" level=info msg="RemoveContainer for \"ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb\" returns successfully" Oct 2 19:43:52.863078 kubelet[2207]: I1002 19:43:52.863040 2207 scope.go:115] "RemoveContainer" containerID="ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb" Oct 2 19:43:52.864075 env[1750]: time="2023-10-02T19:43:52.863948762Z" level=error msg="ContainerStatus for \"ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb\": not found" Oct 2 19:43:52.865238 kubelet[2207]: E1002 19:43:52.865183 2207 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb\": not found" containerID="ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb" Oct 2 19:43:52.865453 kubelet[2207]: I1002 19:43:52.865257 2207 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb} err="failed to get container status \"ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb\": not found" Oct 2 19:43:52.865453 kubelet[2207]: I1002 19:43:52.865287 2207 scope.go:115] "RemoveContainer" containerID="590a8508d500471264f5cfa303a33ea8540118fc8ca500e5291843b27313bb70" Oct 2 19:43:52.868387 systemd[1]: Removed slice kubepods-burstable-pod791452ab_79ca_4aeb_b69a_946b9bd5f34f.slice. 
Oct 2 19:43:52.870384 env[1750]: time="2023-10-02T19:43:52.870308102Z" level=info msg="RemoveContainer for \"590a8508d500471264f5cfa303a33ea8540118fc8ca500e5291843b27313bb70\"" Oct 2 19:43:52.875052 env[1750]: time="2023-10-02T19:43:52.874972787Z" level=info msg="RemoveContainer for \"590a8508d500471264f5cfa303a33ea8540118fc8ca500e5291843b27313bb70\" returns successfully" Oct 2 19:43:52.905821 kubelet[2207]: E1002 19:43:52.905754 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:52.937939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21a80bd2f0c609363ff6908d0ea36596ec8add1b887e213a1ac503c3531e236c-rootfs.mount: Deactivated successfully. Oct 2 19:43:52.938125 systemd[1]: var-lib-kubelet-pods-322f9277\x2d7f70\x2d4d2f\x2d9338\x2d46c47013693a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd78qz.mount: Deactivated successfully. Oct 2 19:43:52.938323 systemd[1]: var-lib-kubelet-pods-791452ab\x2d79ca\x2d4aeb\x2db69a\x2d946b9bd5f34f-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 2 19:43:52.938464 systemd[1]: var-lib-kubelet-pods-791452ab\x2d79ca\x2d4aeb\x2db69a\x2d946b9bd5f34f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:43:52.938604 systemd[1]: var-lib-kubelet-pods-791452ab\x2d79ca\x2d4aeb\x2db69a\x2d946b9bd5f34f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:43:53.106685 env[1750]: time="2023-10-02T19:43:53.106400192Z" level=info msg="StopPodSandbox for \"9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91\"" Oct 2 19:43:53.106685 env[1750]: time="2023-10-02T19:43:53.106543317Z" level=info msg="TearDown network for sandbox \"9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91\" successfully" Oct 2 19:43:53.106685 env[1750]: time="2023-10-02T19:43:53.106606462Z" level=info msg="StopPodSandbox for \"9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91\" returns successfully" Oct 2 19:43:53.109522 env[1750]: time="2023-10-02T19:43:53.108832879Z" level=info msg="StopContainer for \"ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb\" with timeout 1 (s)" Oct 2 19:43:53.109522 env[1750]: time="2023-10-02T19:43:53.108921680Z" level=error msg="StopContainer for \"ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb\": not found" Oct 2 19:43:53.110209 kubelet[2207]: E1002 19:43:53.110114 2207 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb\": not found" containerID="ae442c37eda78f9ea0ff0f46d0b3a5500fe98932d28ce843ea0112c5c786a0fb" Oct 2 19:43:53.112749 kubelet[2207]: I1002 19:43:53.111526 2207 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=322f9277-7f70-4d2f-9338-46c47013693a path="/var/lib/kubelet/pods/322f9277-7f70-4d2f-9338-46c47013693a/volumes" Oct 2 19:43:53.112920 env[1750]: time="2023-10-02T19:43:53.111267258Z" level=info msg="StopPodSandbox for \"21a80bd2f0c609363ff6908d0ea36596ec8add1b887e213a1ac503c3531e236c\"" Oct 2 19:43:53.112920 env[1750]: time="2023-10-02T19:43:53.112501805Z" level=info msg="TearDown network for sandbox 
\"21a80bd2f0c609363ff6908d0ea36596ec8add1b887e213a1ac503c3531e236c\" successfully" Oct 2 19:43:53.112920 env[1750]: time="2023-10-02T19:43:53.112579026Z" level=info msg="StopPodSandbox for \"21a80bd2f0c609363ff6908d0ea36596ec8add1b887e213a1ac503c3531e236c\" returns successfully" Oct 2 19:43:53.113181 kubelet[2207]: I1002 19:43:53.112934 2207 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=791452ab-79ca-4aeb-b69a-946b9bd5f34f path="/var/lib/kubelet/pods/791452ab-79ca-4aeb-b69a-946b9bd5f34f/volumes" Oct 2 19:43:53.906899 kubelet[2207]: E1002 19:43:53.906848 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:54.908618 kubelet[2207]: E1002 19:43:54.908511 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:55.909298 kubelet[2207]: E1002 19:43:55.909221 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:56.608848 kubelet[2207]: E1002 19:43:56.608777 2207 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:56.872385 kubelet[2207]: E1002 19:43:56.872244 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:56.909878 kubelet[2207]: E1002 19:43:56.909722 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:57.910891 kubelet[2207]: E1002 19:43:57.910843 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:58.912243 kubelet[2207]: E1002 19:43:58.912165 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:59.912572 kubelet[2207]: E1002 19:43:59.912500 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:00.912931 kubelet[2207]: E1002 19:44:00.912881 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:01.709525 amazon-ssm-agent[1722]: 2023-10-02 19:44:01 INFO Backing off health check to every 600 seconds for 1800 seconds. 
Oct 2 19:44:01.810888 amazon-ssm-agent[1722]: 2023-10-02 19:44:01 ERROR Health ping failed with error - AccessDeniedException: User: arn:aws:sts::075585003325:assumed-role/jenkins-test/i-07bd67749a50f6f38 is not authorized to perform: ssm:UpdateInstanceInformation on resource: arn:aws:ec2:us-west-2:075585003325:instance/i-07bd67749a50f6f38 because no identity-based policy allows the ssm:UpdateInstanceInformation action Oct 2 19:44:01.810888 amazon-ssm-agent[1722]: status code: 400, request id: 3698875e-c677-4c9d-aef1-b13b3430f294 Oct 2 19:44:01.874342 kubelet[2207]: E1002 19:44:01.874287 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:44:01.914782 kubelet[2207]: E1002 19:44:01.914698 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:02.915782 kubelet[2207]: E1002 19:44:02.915707 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:03.916857 kubelet[2207]: E1002 19:44:03.916780 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:04.918871 kubelet[2207]: E1002 19:44:04.918807 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:05.920714 kubelet[2207]: E1002 19:44:05.920663 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:06.875213 kubelet[2207]: E1002 19:44:06.875179 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:44:06.922473 kubelet[2207]: E1002 19:44:06.922426 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:07.923825 kubelet[2207]: E1002 19:44:07.923752 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:08.924581 kubelet[2207]: E1002 19:44:08.924512 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:09.925051 kubelet[2207]: E1002 19:44:09.924977 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:10.925989 kubelet[2207]: E1002 19:44:10.925935 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:11.876667 kubelet[2207]: E1002 19:44:11.876633 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:44:11.927410 kubelet[2207]: E1002 19:44:11.927362 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:12.928968 kubelet[2207]: E1002 19:44:12.928883 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:13.929506 kubelet[2207]: E1002 19:44:13.929436 2207 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:14.194873 kubelet[2207]: E1002 19:44:14.194775 2207 controller.go:187] failed to update lease, error: Put "https://172.31.17.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.230?timeout=10s": context deadline exceeded Oct 2 19:44:14.930653 kubelet[2207]: E1002 19:44:14.930586 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:15.931284 kubelet[2207]: E1002 19:44:15.931210 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:16.608884 kubelet[2207]: E1002 19:44:16.608833 2207 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:16.688859 env[1750]: time="2023-10-02T19:44:16.688779104Z" level=info msg="StopPodSandbox for \"9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91\"" Oct 2 19:44:16.689521 env[1750]: time="2023-10-02T19:44:16.688990522Z" level=info msg="TearDown network for sandbox \"9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91\" successfully" Oct 2 19:44:16.689521 env[1750]: time="2023-10-02T19:44:16.689087518Z" level=info msg="StopPodSandbox for \"9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91\" returns successfully" Oct 2 19:44:16.690060 env[1750]: time="2023-10-02T19:44:16.689984382Z" level=info msg="RemovePodSandbox for \"9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91\"" Oct 2 19:44:16.690317 env[1750]: time="2023-10-02T19:44:16.690069198Z" level=info msg="Forcibly stopping sandbox \"9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91\"" Oct 2 19:44:16.690317 env[1750]: time="2023-10-02T19:44:16.690278732Z" level=info msg="TearDown network for sandbox \"9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91\" successfully" Oct 2 19:44:16.695471 env[1750]: time="2023-10-02T19:44:16.695337912Z" level=info msg="RemovePodSandbox \"9d24d96be024069fd7e6cd700d0f200feeac63f96de2a30eb6f6ffdf3c699e91\" returns successfully" Oct 2 19:44:16.696086 env[1750]: time="2023-10-02T19:44:16.696021078Z" level=info msg="StopPodSandbox for \"21a80bd2f0c609363ff6908d0ea36596ec8add1b887e213a1ac503c3531e236c\"" Oct 2 19:44:16.696275 env[1750]: time="2023-10-02T19:44:16.696194467Z" level=info msg="TearDown network for sandbox \"21a80bd2f0c609363ff6908d0ea36596ec8add1b887e213a1ac503c3531e236c\" successfully" Oct 2 19:44:16.696353 env[1750]: time="2023-10-02T19:44:16.696274868Z" level=info msg="StopPodSandbox for \"21a80bd2f0c609363ff6908d0ea36596ec8add1b887e213a1ac503c3531e236c\" returns successfully" Oct 2 19:44:16.697080 env[1750]: time="2023-10-02T19:44:16.697008278Z" level=info msg="RemovePodSandbox for \"21a80bd2f0c609363ff6908d0ea36596ec8add1b887e213a1ac503c3531e236c\"" Oct 2 19:44:16.697270 env[1750]: time="2023-10-02T19:44:16.697073474Z" level=info msg="Forcibly stopping sandbox \"21a80bd2f0c609363ff6908d0ea36596ec8add1b887e213a1ac503c3531e236c\"" Oct 2 19:44:16.697270 env[1750]: time="2023-10-02T19:44:16.697230100Z" level=info msg="TearDown network for sandbox \"21a80bd2f0c609363ff6908d0ea36596ec8add1b887e213a1ac503c3531e236c\" successfully" Oct 2 19:44:16.702262 env[1750]: time="2023-10-02T19:44:16.702186259Z" level=info msg="RemovePodSandbox \"21a80bd2f0c609363ff6908d0ea36596ec8add1b887e213a1ac503c3531e236c\" returns successfully" Oct 2 19:44:16.716680 
kubelet[2207]: W1002 19:44:16.716625 2207 machine.go:65] Cannot read vendor id correctly, set empty. Oct 2 19:44:16.878703 kubelet[2207]: E1002 19:44:16.877910 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:44:16.932354 kubelet[2207]: E1002 19:44:16.932308 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:17.479933 kubelet[2207]: E1002 19:44:17.479875 2207 controller.go:187] failed to update lease, error: Put "https://172.31.17.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.230?timeout=10s": unexpected EOF Oct 2 19:44:17.480783 kubelet[2207]: E1002 19:44:17.480735 2207 controller.go:187] failed to update lease, error: Put "https://172.31.17.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.230?timeout=10s": dial tcp 172.31.17.116:6443: connect: connection refused Oct 2 19:44:17.481629 kubelet[2207]: E1002 19:44:17.481558 2207 controller.go:187] failed to update lease, error: Put "https://172.31.17.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.230?timeout=10s": dial tcp 172.31.17.116:6443: connect: connection refused Oct 2 19:44:17.482261 kubelet[2207]: E1002 19:44:17.482179 2207 controller.go:187] failed to update lease, error: Put "https://172.31.17.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.230?timeout=10s": dial tcp 172.31.17.116:6443: connect: connection refused Oct 2 19:44:17.482261 kubelet[2207]: I1002 19:44:17.482254 2207 controller.go:114] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease Oct 2 19:44:17.482976 kubelet[2207]: E1002 19:44:17.482901 2207 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.31.17.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.230?timeout=10s": dial tcp 172.31.17.116:6443: connect: connection refused Oct 2 19:44:17.684222 kubelet[2207]: E1002 19:44:17.684127 2207 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.31.17.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.230?timeout=10s": dial tcp 172.31.17.116:6443: connect: connection refused Oct 2 19:44:17.934024 kubelet[2207]: E1002 19:44:17.933958 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:18.086053 kubelet[2207]: E1002 19:44:18.085968 2207 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.31.17.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.230?timeout=10s": dial tcp 172.31.17.116:6443: connect: connection refused Oct 2 19:44:18.935049 kubelet[2207]: E1002 19:44:18.934974 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:19.935606 kubelet[2207]: E1002 19:44:19.935560 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:20.937054 kubelet[2207]: E1002 19:44:20.936979 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:21.880478 
kubelet[2207]: E1002 19:44:21.880425 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:44:21.937432 kubelet[2207]: E1002 19:44:21.937386 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:22.939085 kubelet[2207]: E1002 19:44:22.939039 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:23.940782 kubelet[2207]: E1002 19:44:23.940735 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:24.942067 kubelet[2207]: E1002 19:44:24.942022 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:25.943838 kubelet[2207]: E1002 19:44:25.943769 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:26.881361 kubelet[2207]: E1002 19:44:26.881301 2207 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:44:26.944340 kubelet[2207]: E1002 19:44:26.944256 2207 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
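The lease errors in this stretch show the kubelet losing the API server at 172.31.17.116:6443: the Lease PUT first hits "context deadline exceeded", then "unexpected EOF" and "connection refused", and after five failed updates the kubelet falls back to ensuring the lease exists, retrying with a doubling delay (200ms, 400ms, 800ms above). The loop below is only an illustration of that retry shape, assuming a 200ms initial delay; it is not kubelet code.

# Illustrative retry loop (not kubelet source), assuming a 200ms initial delay that doubles
# after each failed attempt, matching the 200ms -> 400ms -> 800ms steps logged above.
import time
from typing import Callable

def retry_with_doubling(op: Callable[[], bool], initial_delay_s: float = 0.2,
                        max_attempts: int = 5) -> bool:
    """Call op() until it succeeds or max_attempts is exhausted, doubling the sleep each time."""
    delay = initial_delay_s
    for attempt in range(1, max_attempts + 1):
        if op():
            return True
        if attempt < max_attempts:
            time.sleep(delay)
            delay *= 2
    return False

if __name__ == "__main__":
    # Stand-in operation that always fails, like the lease PUTs while the API server is down.
    print(retry_with_doubling(lambda: False, max_attempts=4))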